# AI Toolkit for VS Code - February 2026 Update
February brings a major milestone for AI Toolkit. Version 0.30.0 is packed with new capabilities that make agent development more discoverable, debuggable, and production-ready: from a brand-new Tool Catalog, to an end-to-end Agent Inspector, to treating evaluations as first-class tests.

## New in v0.30.0

### Tool Catalog: One place to discover and manage agent tools

The new Tool Catalog is a centralized hub for discovering, configuring, and integrating tools into your AI agents. Instead of juggling scattered configs and definitions, you now get a unified experience for tool management:

- Browse, search, and filter tools from the public Foundry catalog and local stdio MCP servers
- Configure connection settings for each tool directly in VS Code
- Add tools to agents seamlessly via Agent Builder
- Manage the full tool lifecycle: add, update, or remove tools with confidence

Why it matters: expanding your agent's capabilities is now a few clicks away, and stays manageable as your agent grows.

### Agent Inspector: Debug agents like real software

The new Agent Inspector turns agent debugging into a first-class experience inside VS Code. Just press F5 and launch your agent with full debugger support. Key highlights:

- One-click F5 debugging with breakpoints, variable inspection, and step-through execution
- Copilot auto-configuration that scaffolds agent code, endpoints, and debugging setup
- Production-ready code generated using the Hosted Agent SDK, ready for Microsoft Foundry
- Real-time visualization of streaming responses, tool calls, and multi-agent workflows
- Quick code navigation: double-click workflow nodes to jump straight to source
- Unified experience combining chat and workflow visualization in one view

Why it matters: agents are no longer black boxes. You can see exactly what's happening, when, and why.

### Evaluation as Tests: Treat quality like code

With Evaluation as Tests, agent quality checks now fit naturally into existing developer workflows. What's new:

- Define evaluations as test cases using familiar pytest syntax and Eval Runner SDK annotations
- Run evaluations directly from VS Code Test Explorer, mixing and matching test cases
- Analyze results in a tabular view with Data Wrangler integration
- Submit evaluation definitions to run at scale in Microsoft Foundry

Why it matters: evaluations are no longer ad-hoc scripts; they're versioned, repeatable, and CI-friendly. A minimal sketch of the idea follows below.
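To make "evaluation as a test" concrete, here is a minimal, hedged sketch of what such a test case can look like in plain pytest. The `run_agent` and `score_relevance` helpers are hypothetical placeholders for your own agent call and scoring logic; the actual Eval Runner SDK annotations and APIs are documented with the toolkit.

```python
# Hypothetical sketch: an evaluation expressed as an ordinary pytest test.
# run_agent() and score_relevance() are placeholders for your own agent call
# and metric; the real Eval Runner SDK provides its own annotations.
import pytest

CASES = [
    ("What is the capital of France?", "Paris"),
    ("Convert 2 km to meters.", "2000"),
]

def run_agent(prompt: str) -> str:
    """Placeholder: call your agent and return its text response."""
    raise NotImplementedError

def score_relevance(response: str, expected: str) -> float:
    """Placeholder metric: 1.0 if the expected answer appears in the response."""
    return 1.0 if expected.lower() in response.lower() else 0.0

@pytest.mark.parametrize("prompt,expected", CASES)
def test_agent_answers_are_relevant(prompt, expected):
    response = run_agent(prompt)
    assert score_relevance(response, expected) >= 0.5
```

Because it is just a test file, it can be run from the VS Code Test Explorer, versioned with the rest of your code, and wired into CI like any other test suite.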
## Improvements across the Toolkit

### Agent Builder

Agent Builder received a major usability refresh:

- Redesigned layout for better navigation and focus
- Quick switcher to move between agents effortlessly
- Support for authoring, running, and saving Foundry prompt agents
- Add tools to Foundry prompt agents directly from the Tool Catalog or built-in tools
- New Inspire Me feature to help you get started when drafting agent instructions
- Numerous performance and stability improvements

### Model Catalog

- Added support for models using the OpenAI Response API, including gpt-5.2-codex
- General performance and reliability improvements

### Build Agent with GitHub Copilot

- New Workflow entry point to quickly generate multi-agent workflows with Copilot
- Ability to orchestrate workflows by selecting prompt agents from Foundry

### Conversion & Profiling

- Generate interactive playgrounds for history models
- Added Qualcomm GPU recipes
- Show resource usage for Phi Silica directly in Model Playground

## Wrapping up

Version 0.30.0 is a big step forward for AI Toolkit. With better discoverability, real debugging, structured evaluation, and deeper Foundry integration, building AI agents in VS Code now feels much closer to building production software. As always, we'd love your feedback. Keep it coming, and happy agent building!

# Announcing Public Preview: AI Toolkit for GitHub Copilot Prompt-First Agent Development
This week at GitHub Universe, we're announcing the Public Preview of GitHub Copilot prompt-first agent development in the AI Toolkit for Visual Studio Code. With this release, building powerful AI agents is now simpler and faster: no need to wrestle with complex frameworks or orchestrators. Just start with natural language prompts and let GitHub Copilot guide you from concept to working agent code.

## Accelerate Agent Development in VS Code

The AI Toolkit embeds agent development workflows directly into Visual Studio Code and GitHub Copilot, enabling you to transform ideas into production-ready agents within minutes. This unified experience empowers developers and product teams to:

- Select the best model for your agent scenario
- Build and orchestrate agents using Microsoft Agent Framework
- Trace agent behaviors
- Evaluate agent response quality

## Select the best model for your scenario

Models are the foundation for building powerful agents. Using the AI Toolkit, you can already explore and experiment with a wide range of local and remote models. Copilot now recommends models tailored to your agent's needs, helping you make informed choices quickly.

## Build and orchestrate agents

Whether you're creating a single agent or designing a multi-agent workflow, Copilot leverages the latest Microsoft Agent Framework to generate robust agent code. You can initiate agent creation with simple prompts and visualize workflows for greater clarity and control:

- Create a single agent using Copilot
- Create a multi-agent workflow using Copilot and visualize workflow execution

## Trace agent behaviors

As agents become more sophisticated, understanding their actions is crucial. The AI Toolkit enables tracing via Copilot, collecting local traces and displaying detailed agent calls, all within VS Code.

## Evaluate agent response quality

Copilot guides you through structured evaluation, recommending metrics and generating test datasets. Integrate evaluations into your CI/CD pipeline for continuous quality assurance and confident deployments.

## Get started and share feedback

This release marks a significant step toward making AI agent development easier and more accessible in Visual Studio Code. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features on our GitHub repo. Thank you for being a part of this journey with us!

# AI Toolkit for VS Code October Update
We're thrilled to bring you the October update for the AI Toolkit for Visual Studio Code! This month marks another major milestone with version 0.24.0, introducing groundbreaking GitHub Copilot Tools Integration and additional user experience enhancements that make AI-powered development more seamless than ever. Let's dive into what's new!

## GitHub Copilot Tools Integration

We are excited to announce the integration of GitHub Copilot Tools into AI Toolkit for VS Code. This integration empowers developers to build AI-powered applications more efficiently by leveraging Copilot's capabilities enhanced by AI Toolkit.

### AI Agent Code Generation Tool

This powerful tool provides best practices, guidance, steps, and code samples on Microsoft Agent Framework for GitHub Copilot to better scaffold AI agent applications. Whether you're building your first agent or scaling complex multi-agent systems, this tool ensures you follow the latest best practices and patterns.

### AI Agent Evaluation Planner Tool

Building great AI agents requires thorough evaluation. This tool guides users through the complete process of evaluating AI agents, including:

- Defining evaluation metrics: establish clear success criteria for your agents
- Creating evaluation datasets: generate comprehensive test datasets
- Analyzing results: understand your agent's performance and areas for improvement

The Evaluation Planner works seamlessly with two specialized sub-tools:

### Evaluation Agent Runner Tool

This tool runs agents on provided datasets and collects results, making it easy to test your agents at scale across multiple scenarios and use cases.

### Evaluation Code Generation Tool

Get best practices, guidance, steps, and code samples on Azure AI Foundry Evaluation Framework for GitHub Copilot to better scaffold code for evaluating AI agents.

### Easy Access and Usage

You can access these powerful tools in two convenient ways:

- Direct GitHub Copilot integration: simply enter prompts like "Create an AI agent using Microsoft Agent Framework to help users plan a trip to Paris." or "Evaluate the performance of my AI agent using Azure AI Foundry Evaluation Framework."
- AI Toolkit Tree View: for quick access, find these tools in the AI Toolkit Tree View UI under the section `Build Agent with GitHub Copilot`.

## Additional Enhancements

### Model Playground Improvements

The user experience in Model Playground has been significantly enhanced:

- Resizable divider: the divider between chat output and model settings is now resizable, allowing you to customize your workspace layout for better usability and productivity.

### Model Catalog Updates

We've unified and streamlined the model discovery experience:

- Unified local models: the ONNX models section in the Model Catalog has been merged with Foundry Local models on macOS and Windows, providing a unified experience for discovering and selecting local models.
- Simplified navigation: find all your local model options in one place, making it easier to compare and select the right model for your use case.

## Why This Release Matters

Version 0.24.0 represents a significant step forward in making AI development more accessible and efficient:

- Seamless integration: the deep integration with GitHub Copilot means AI best practices are now available right where you're already working.
- End-to-end workflow: from agent creation to evaluation, you now have comprehensive tooling that guides you through the entire AI development lifecycle.
- Enhanced productivity: improved UI elements and unified experiences reduce friction and help you focus on building great AI applications.

## Get Started and Share Your Feedback

Ready to experience the future of AI development? Here's how to get started:

- Download: install the AI Toolkit from the Visual Studio Code Marketplace
- Learn: explore our comprehensive AI Toolkit Documentation
- Discover: check out the complete changelog for v0.24.0

We'd love to hear from you! Whether it's a feature request, bug report, or feedback on your experience, join the conversation and contribute directly on our GitHub repository.

## What's Next?

This release sets the foundation for even more exciting developments ahead. The GitHub Copilot Tools Integration opens up new possibilities for AI-assisted development, and we're just getting started. Stay tuned for more updates, and let's continue building the future of AI agent development together! Happy coding, and see you next month!

# Introducing the Microsoft Agent Framework: A Unified Foundation for AI Agents and Workflows
The landscape of AI development is evolving rapidly, and Microsoft is at the forefront with the release of the Microsoft Agent Framework, an open-source SDK designed to empower developers to build intelligent, multi-agent systems with ease and precision. Whether you're working in .NET or Python, this framework offers a unified, extensible foundation that merges the best of Semantic Kernel and AutoGen, while introducing powerful new capabilities for agent orchestration and workflow design.

Further reading:

- Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog
- Introducing Microsoft Agent Framework | Microsoft Azure Blog

## Why Another Agent Framework?

Both Semantic Kernel and AutoGen have pioneered agentic development: Semantic Kernel with its enterprise-grade features and AutoGen with its research-driven abstractions. The Microsoft Agent Framework is the next generation of both, built by the same teams to unify their strengths:

- AutoGen's simplicity in multi-agent orchestration
- Semantic Kernel's robustness in thread-based state management, telemetry, and type safety
- New capabilities like graph-based workflows, checkpointing, and human-in-the-loop support

This convergence means developers no longer have to choose between experimentation and production. The Agent Framework is designed to scale from single-agent prototypes to complex, enterprise-ready systems.

## Core Capabilities

### AI Agents

AI agents are autonomous entities powered by LLMs that can process user inputs, make decisions, call tools and MCP servers, and generate responses. They support providers like Azure OpenAI, OpenAI, and Azure AI, and can be enhanced with:

- Agent threads for state management
- Context providers for memory
- Middleware for action interception
- MCP clients for tool integration

Use cases include customer support, education, code generation, research assistance, and more, especially where tasks are dynamic and underspecified.

### Workflows

Workflows are graph-based orchestrations that connect multiple agents and functions to perform complex, multi-step tasks. They support:

- Type-based routing
- Conditional logic
- Checkpointing
- Human-in-the-loop interactions
- Multi-agent orchestration patterns (sequential, concurrent, hand-off, Magentic)

Workflows are ideal for structured, long-running processes that require reliability and modularity.

## Developer Experience

The Agent Framework is designed to be intuitive and powerful:

- Installation: Python: `pip install agent-framework`; .NET: `dotnet add package Microsoft.Agents.AI`
- Integration: works with Foundry SDK, MCP SDK, A2A SDK, and M365 Copilot Agents
- Samples and manifests: explore declarative agent manifests and code samples
- Learning resources: Microsoft Learn modules, AI Agents for Beginners, AI Show demos, and the Azure AI Foundry Discord community

A minimal Python sketch of what a single agent looks like follows below.
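To give a feel for the programming model, here is a minimal, hedged Python sketch of a single agent. The specific class and method names (the Azure OpenAI chat client, `create_agent`, `run`) are assumptions based on the framework's public preview samples and may differ between releases; treat this as an illustration, not canonical API usage.

```python
# Minimal sketch, not canonical API usage: class/method names are assumptions
# based on public Microsoft Agent Framework preview samples and may change.
# Install first: pip install agent-framework
import asyncio
from agent_framework.azure import AzureOpenAIChatClient  # assumed import path

async def main() -> None:
    # Create a chat client against your Azure OpenAI deployment
    # (assumed to read endpoint/credentials from environment variables).
    client = AzureOpenAIChatClient()

    # Turn the client into an agent with a name and instructions.
    agent = client.create_agent(
        name="travel-helper",
        instructions="You help users plan short city trips.",
    )

    # Run a single turn and print the response.
    result = await agent.run("Plan a one-day itinerary for Paris.")
    print(result)

asyncio.run(main())
```

For the exact, current API surface, defer to the official samples and migration guides linked above.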
## Migration and Compatibility

If you're currently using Semantic Kernel or AutoGen, migration guides are available to help you transition smoothly. The framework is designed to be backward-compatible where possible, and future updates will continue to support community contributions via the GitHub repository.

## Important Considerations

The Agent Framework is in public preview. Feedback and issues are welcome on the GitHub repository. When integrating with third-party servers or agents, review data sharing practices and compliance boundaries carefully.

The Microsoft Agent Framework marks a pivotal moment in AI development, bringing together research innovation and enterprise readiness into a single, open-source foundation. Whether you're building your first agent or orchestrating a fleet of them, this framework gives you the tools to do it safely, scalably, and intelligently. Ready to get started? Download the SDK, explore the documentation, and join the community shaping the future of AI agents.

# How do I choose the right model for my agent?
Welcome back to Agent Support, a developer advice column for those head-scratching moments when you're building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents. Today's question comes from a developer who's right at the beginning of their agent-building journey and needs a little help choosing a model.

Dear Agent Support: I'm overwhelmed by all the model options out there. Some are small, some are huge. Some are free, some cost a lot. Some say "multimodal" but I'm not sure if I need that. How do I choose the right model for my agent?

Great question! Model choice is one of the most important design decisions you'll make. Pick something too small, and your agent may struggle with complex tasks. Go too big, and you could be paying for power you don't need. Let's break down the key factors to consider.

## Capabilities vs. Use Case

The first, and most important, question isn't which model is "best." It's: what does my agent actually need to do? Here are a few angles to think through:

- Input and output types: Will your agent only handle text, or does it need to process other formats like images, audio, or structured data? Models differ in how many modalities they support and in how well they can handle outputs that must follow strict formatting.
- Complexity of tasks: Simple, transactional tasks (like pulling information from a document or answering straightforward queries) don't require the same reasoning depth as tasks that involve planning, multi-step logic, or open-ended creativity. Define the level of reasoning and adaptability your agent needs.
- Control requirements: Some agents need highly controlled outputs (think JSON schemas for downstream services), while others benefit from free-form creativity. The degree of control you need (i.e., structured output, function calling, system prompt) should guide model choice.
- Domain knowledge: Does your agent operate in a general-purpose domain, or does it need strong understanding of a specific area (like legal, medical, or technical documentation)? Consider whether you'll rely on the model's built-in knowledge, retrieval from external sources, or fine-tuning for domain expertise.
- Interaction style: Will users interact with the agent in short, direct prompts, or longer, conversational exchanges? Some models handle chat-like, multi-turn contexts better than others, while others excel at single-shot completions.

In short: start by mapping out your agent's needs in terms of data types, reasoning depth, control, domain, and interaction style. Once you have that picture, it's much easier to narrow down which models are a genuine fit, and which ones would be mismatched.

## Performance vs. Cost

Once you know what your agent needs to do, the next trade-off is between performance and cost. Bigger models are often more capable, but they also come with higher latency, usage costs, and infrastructure requirements. The trick is to match "enough performance" to the real-world expectations for your agent. Here are some factors to weigh:

- Task complexity vs. model size: If your agent's tasks involve nuanced reasoning, long-context conversations, or open-ended problem solving, a more capable (and often larger) model may be necessary. On the other hand, for lightweight lookups or structured Q&A, a smaller model can perform just as well, and more efficiently.
- Response time expectations: Latency matters. A model that takes 8 to 10 seconds to respond may be fine in a batch-processing workflow but frustrating in a real-time chat interface. Think about how quickly your users expect the agent to respond and whether you're willing to trade speed for accuracy.
- Budget and token costs: Larger models consume more tokens per request, which translates to higher costs, especially if your agent will scale to many users. Consider both per-request cost and aggregate monthly cost based on expected usage volume.
- Scaling strategy: Some developers use a "tiered" approach: route simple queries to a smaller, cheaper model and reserve larger models for complex tasks. This can balance performance with budget without compromising user experience. The Azure AI Foundry Model Router works in a similar way. (A rough sketch of the idea follows after this list.)
- Experimentation over assumptions: Don't assume the largest model is always required. Start with something mid-range, test it against your use case, and only scale up if you see gaps. This iterative approach often prevents overspending.

At the end of the day, performance isn't about squeezing the most power out of a model; it's about choosing the right amount of capability for the job, without paying for what you don't need.
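Here is a rough, hedged Python sketch of that tiered routing idea. The model names and the client call are placeholders (any OpenAI-compatible SDK would work), and the routing heuristic is deliberately naive; the point is the shape of the pattern, not the rule itself.

```python
# Illustrative sketch of tiered model routing; model names and the client
# are placeholders for whatever provider/SDK you actually use.
from openai import OpenAI

client = OpenAI()  # assumes an API key / compatible endpoint is configured

SMALL_MODEL = "gpt-4o-mini"  # cheap, fast: simple lookups and short answers
LARGE_MODEL = "gpt-4o"       # more capable: planning, multi-step reasoning

def pick_model(prompt: str) -> str:
    """Naive heuristic: long or multi-step prompts go to the larger model."""
    looks_complex = len(prompt) > 400 or any(
        word in prompt.lower() for word in ("plan", "compare", "step by step")
    )
    return LARGE_MODEL if looks_complex else SMALL_MODEL

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("What's the capital of France?"))           # routed to the small model
print(ask("Plan a 3-day Paris trip, step by step."))  # routed to the large model
```

In practice you would replace the heuristic with something measured against your own traffic (or lean on a managed router), but even a simple split like this often cuts cost noticeably.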
## Licensing and Access

Even if you've found a model that looks perfect on paper, practical constraints around access and licensing can make or break your choice. These considerations often get overlooked until late in the process, but they can have big downstream impacts. A few things to keep in mind:

- Where the model lives: Some models are only accessible through a hosted API (like on a cloud provider), while others are open source and can be self-hosted. Hosted APIs are convenient and handle scaling for you, but they also lock you into availability, pricing, and rate limits set by the provider. Self-hosting gives you control, but also means managing infrastructure, updates, and security yourself.
- Terms of use: Pay attention to licensing restrictions. Some providers limit usage for commercial products, sensitive data, or high-risk domains (like healthcare or finance). Others may require explicit consent or premium tiers to unlock certain capabilities.
- Data handling and privacy: If your agent processes sensitive or user-specific data, you'll need to confirm whether the model provider logs, stores, or uses data for training. Check for features like "no data retention" modes, private deployments, or enterprise SLAs if compliance is critical.
- Regional availability: Certain models or features may only be available in specific regions due to infrastructure or regulatory constraints. This matters if your users are global, or if you need to comply with data residency laws (e.g., keeping data in the EU).
- Support for deployment options: Consider whether the model can be deployed in the way you need: API-based integration, on-prem deployment, or edge devices. If you're building something that runs locally (say, on a mobile app), an enormous cloud-only model won't be practical.
- Longevity and ecosystem: Models evolve quickly. Some experimental models may not be supported long-term, while others are backed by a stable provider with ongoing updates. Think about how much you want to bet on a model that might disappear in six months versus one with a roadmap you can count on.

Model choice isn't just about capability and performance; it's also about whether you can use it under the terms, conditions, and environments that your project requires.

## Exploring Models with Azure AI Foundry

Once you've thought through capabilities, performance trade-offs, and licensing, the next step is exploring what's available to you. If you're building with Azure, this is where Azure AI Foundry Models becomes invaluable. Instead of guessing which model might fit, you can browse, filter, and compare options directly, complete with detailed model cards that outline features, intended use cases, and limitations. Think of the model catalog as your "shopping guide" for models: it helps you quickly spot which ones align with your agent's needs and gives you the fine print before you commit.

## Recap

Here's a quick rundown of what we covered:

- Start with capabilities. Match the model's strengths to the inputs, outputs, and complexity your agent requires.
- Balance performance with cost. Bigger isn't always better. Pick the right level of capability without overspending.
- Review licensing and access. Make sure the model is available in your region, permitted for your use case, and deployable in the way you need.
- Explore before you build. Use the Azure AI Foundry Model Catalog to filter options, read model cards, and test in the Playground.

## Want to Go Deeper?

With so many new models arriving almost daily, it can be a challenge to keep up with what's new! Our Model Mondays series has you covered: each week, we bring you the latest news in AI models. We also recently launched a brand-new series, Inside Azure AI Foundry, where we dive deep into the latest AI models, tools, and platform features, with practical demos and technical walkthroughs that show you how to integrate them into your workflows. It's perfect for developers who want to see capabilities in action before deploying them in real projects.

As always, remember: your agent doesn't need the "best" model on paper; it needs the right model for the job it's designed to do.

# How do I catch bad data before it derails my agent?
When an agent relies on data that's incomplete, inconsistent, or plain wrong, every downstream step inherits that problem. You will waste time debugging hallucinations that are actually caused by a stray "NULL" string, or re-running fine-tunes because of invisible whitespace in a numeric column. Even small quality issues can:

- Skew model evaluation metrics
- Trigger exceptions in your application code
- Undermine user trust when answers look obviously off

The bottom line is that a five-minute inspection up front can save hours later. A quick sketch of what that inspection might look like follows below.
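For instance, here is a small Python sketch (using pandas, and assuming a hypothetical `data.csv` with a numeric `price` column) of that five-minute inspection: flagging placeholder "NULL" strings, stray whitespace in numeric columns, and values that won't parse, before any of it reaches the agent.

```python
# Quick data sanity check before feeding data to an agent.
# Assumes a hypothetical data.csv with a numeric "price" column.
import pandas as pd

df = pd.read_csv("data.csv", dtype=str)  # read everything as strings first

# 1. Placeholder strings that should really be missing values.
placeholder_hits = df.isin(["NULL", "null", "N/A", ""]).sum()
print("Placeholder-like values per column:\n", placeholder_hits)

# 2. Invisible whitespace that breaks numeric parsing.
price = df["price"].fillna("")
whitespace_hits = (price != price.str.strip()).sum()
print("Rows with leading/trailing whitespace in 'price':", whitespace_hits)

# 3. Values that still fail to parse as numbers after trimming.
parsed = pd.to_numeric(price.str.strip(), errors="coerce")
print("Unparseable 'price' values:", parsed.isna().sum())
```

Adapt the column names and placeholder list to your own data; the point is simply to make the check a routine step rather than a post-mortem.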
# How do I A/B test different versions of my agent?

We tend to think of A/B testing as a marketer's tool (i.e., headline A vs. headline B). But it's just as useful for developers building agents. Why? Because most agent improvements are experimental. You're changing one or more of the following:

- the system prompt
- the model
- the tool selection
- the output format
- the interaction flow

But without a structured way to test those changes, you're just guessing. You might think your updated version is smarter or more helpful, but until you compare, you won't know! A/B testing helps you turn instincts into insight and gives you real data to back your next decision. A bare-bones sketch of what such a comparison can look like follows below.
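As a purely illustrative example (the `run_agent` and `judge` helpers are hypothetical placeholders for your own agent call and scoring method), an A/B comparison can be as simple as running two variants over the same prompts and averaging the scores:

```python
# Bare-bones A/B comparison of two agent variants over the same prompts.
# run_agent() and judge() are placeholders for your agent call and your
# preferred scoring method (exact match, rubric, LLM-as-judge, etc.).
PROMPTS = [
    "Summarize this week's release notes in two sentences.",
    "List three risks of deploying without evaluations.",
]

VARIANTS = {
    "A": "You are a concise assistant.",                              # current prompt
    "B": "You are a concise, structured assistant that answers in bullets.",
}

def run_agent(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: call your model with the given system prompt."""
    raise NotImplementedError

def judge(response: str) -> float:
    """Placeholder: return a quality score between 0 and 1."""
    raise NotImplementedError

scores = {name: 0.0 for name in VARIANTS}
for prompt in PROMPTS:
    for name, system_prompt in VARIANTS.items():
        scores[name] += judge(run_agent(system_prompt, prompt))

print({name: total / len(PROMPTS) for name, total in scores.items()})
```

The structure matters more than the scoring method: same prompts, one change at a time, and a number you can compare before and after.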
# How do I give my agent access to tools?

Welcome back to Agent Support, a developer advice column for those head-scratching moments when you're building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents. Today's question comes from someone trying to move beyond chat-only agents into more capable, action-driven ones:

Dear Agent Support: I want my agent to do more than just respond with text. Ideally, it could look up information, call APIs, or even run code, but I'm not sure where to start. How do I give my agent access to tools?

This is exactly where agents start to get interesting! Giving your agent tools is one of the most powerful ways to expand what it can do. But before we get into the "how," let's talk about what tools actually mean in this context, and how Model Context Protocol (MCP) helps you use them in a standardized, agent-friendly way.

## What Do We Mean by "Tools"?

In agent terms, a tool is any external function or capability your agent can use to complete a task. That might be:

- A web search function
- A weather lookup API
- A calculator
- A database query
- A custom Python script

When you give an agent tools, you're giving it a way to take action, not just generate text. Think of tools as buttons the agent can press to interact with the outside world.

## Why Give Your Agent Tools?

Without tools, an agent is limited to what it "knows" from its training data and prompt. It can guess, summarize, and predict, but it can't do. Tools change that! With the right tools, your agent can:

- Pull live data from external systems
- Perform dynamic calculations
- Trigger workflows in real time
- Make decisions based on changing conditions

It's the difference between an assistant that can answer trivia questions vs. one that can book your travel or manage your calendar.

## So... How Does This Work?

Enter Model Context Protocol (MCP). MCP is a simple, open protocol that helps agents use tools in a consistent way, whether those tools live in your app, your cloud project, or on a server you built yourself. Here's what MCP does:

- Describes tools in a standardized format that models can understand
- Wraps the function call, input, and output into a predictable schema
- Lets agents request tools as needed (with reasoning behind their choices)

This makes it much easier to plug tools into your agent workflow without reinventing the wheel every time!

## How to Connect an Agent to Tools

Wiring tools into your agent might sound complex, but it doesn't have to be! If you've already got an MCP server in mind, there's a straightforward way within the AI Toolkit to expose it as a tool your agent can use. Here's how to do it:

1. Open the Agent Builder from the AI Toolkit panel in Visual Studio Code.
2. Click the + New Agent button and provide a name for your agent.
3. Select a Model for your agent.
4. Within the Tools section, click + MCP Server.
5. In the wizard that appears, click + Add Server.
6. From there, you can select one of the MCP servers built by Microsoft, connect to an existing server that's running, or even create your own using a template!
7. After giving the server a Server ID, you'll be given the option to select which tools from the server to add for your agent.

Once connected, your agent can call tools dynamically based on the task at hand. If you'd rather see what a minimal server looks like in code, a small sketch follows below.
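For reference, here is a minimal sketch of a local stdio MCP server using the FastMCP helper from the MCP Python SDK. The tool itself (a canned weather lookup) and the server name are made up for illustration; the MCP for Beginners curriculum has complete, tested examples.

```python
# Minimal stdio MCP server sketch using the MCP Python SDK's FastMCP helper.
# The "get_weather" tool is a made-up example; replace it with your own logic.
# Install first: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a (canned) weather report for the given city."""
    return f"It is sunny and 22°C in {city}."

if __name__ == "__main__":
    # Serve over stdio so clients like the AI Toolkit can launch it locally.
    mcp.run()
```

Once a server like this is on your machine, the Agent Builder steps above are how you surface its tools to your agent.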
## Test Before You Build

Once you've connected your agent to an MCP server and added tools, don't jump straight into full integration. It's worth taking time to test whether the agent is calling the right tool for the job. You can do this directly in the Agent Builder: enter a test prompt that should trigger a tool in the User Prompt field, click Run, and observe how the model responds. This gives you a quick read on tool call accuracy. If the agent selects the wrong tool, it's a sign that your system prompt might need tweaking before you move forward.

However, if the agent calls the correct tool but the output still doesn't look right, take a step back and check both sides of the interaction. It might be that the system prompt isn't clearly guiding the agent on how to use or interpret the tool's response. But it could also be an issue with the tool itself, whether that's a bug in the logic, unexpected behavior, or a mismatch between input and expected output. Testing both the tool and the prompt in isolation can help you pinpoint where things are going wrong before you move on to full integration.

## Recap

Here's a quick rundown of what we covered:

- Tools = external functions your agent can use to take action.
- MCP = a protocol that helps your agent discover and use those tools reliably.
- If the agent calls the wrong tool, or uses the right tool incorrectly, check your system prompt and test the tool logic separately to isolate the issue.

## Want to Go Deeper?

Check out my latest video on how to connect your agent to an MCP server; it's part of the Build an Agent Series, where I walk through the building blocks of turning an idea into a working AI agent. The MCP for Beginners curriculum covers all the essentials: MCP architecture, creating and debugging servers, and best practices for developing, testing, and deploying MCP servers and features in production environments. It also includes several hands-on exercises across .NET, Java, TypeScript, JavaScript, and Python.

Explore the full curriculum: aka.ms/AITKmcp

And for all your general AI and AI agent questions, join us in the Azure AI Foundry Discord! You can find me hanging out there answering your questions about the AI Toolkit. I'm looking forward to chatting with you there!

Whether you're building a productivity agent, a data assistant, or a game bot, tools are how you turn your agent from smart to useful.

# How do I get my agent to respond in a structured format like JSON?
Welcome back to Agent Support, a developer advice column for those head-scratching moments when you're building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents.

Dear Agent Support: I'm building an agent that feeds its response into another app, but sometimes the output is messy or unpredictable. Can you help?

This looks like a job for JSON! While agent output often comes back as plain text, there are times when you need something more structured, especially if another system, app, or service needs to reliably parse and use that response.

## What's the Deal with JSON Output?

If your agent is handing off data to another app, you need the output to be clean, consistent, and easy to parse. That's where JSON comes in. JSON (JavaScript Object Notation) is a lightweight, widely supported format that plays nicely with almost every modern tool or platform. Whether your agent is powering a dashboard, triggering a workflow, or sending data into a UI component, JSON makes the handoff smooth. You can define exactly what shape the response should take (i.e., keys, values, types) so that other parts of your system know what to expect every single time.

Without it? Things get messy fast! Unstructured text can vary wildly from one response to the next. A missing field, a misaligned format, or even just an unexpected line break can break downstream logic, crash a frontend, or silently cause bugs you won't catch until much later. Worse, you end up writing brittle post-processing code to clean up the output just to make it usable.

The bottom line: if you want your agent to work well with others, it needs to speak in a structured format. JSON isn't just a nice-to-have; it's the language of interoperability.

## When You'd Want Structured Output

Not every agent needs to speak JSON, but if your response is going anywhere beyond the chat window, structured output is your best bet. Let's say your agent powers a dashboard. You might want to display the response in different UI components: a title, a summary, maybe a set of bullet points. That only works if the response is broken into predictable parts. Same goes for workflows. If you're passing the output into another service, like Power Automate, an agent framework like Semantic Kernel, or even another agent, it needs to follow a format those systems can recognize.

Structured output also makes logging and debugging easier. When every response follows the same format, you can spot problems faster. Missing fields, weird values, or unexpected data types stand out immediately. It also future-proofs your agent. If you want to add new features later, like saving responses to a database or triggering different actions based on the content, having structured output gives you the flexibility to build on top of what's already there. If the output is meant to do more than just be read by a human, it should be structured for a machine.

## How to Define a JSON Format

Once you know your agent's output needs to be structured, the next step is telling the model exactly how to structure it. That's where defining a schema comes in. In this context, a schema is just a blueprint for the shape of your data. It outlines what fields you expect, what type of data each one should hold, and how everything should be organized. Think of it like a form template: the model just needs to fill in the blanks.
Here's a simple example of a JSON schema for a to-do app:

```json
{
  "task": "string",
  "priority": "high | medium | low",
  "due_date": "YYYY-MM-DD"
}
```

Once you have your format in mind, include it directly in your prompt. But if you're using the AI Toolkit in Visual Studio Code, you don't have to do this manually every time!

## Create a JSON Schema with the AI Toolkit

The Agent Builder feature supports structured output natively. You can provide a JSON schema alongside your prompt, and the agent will automatically aim to match that format. This takes the guesswork out of prompting. Instead of relying on natural language instructions to shape the output, you're giving the model a concrete set of instructions to follow. Here's how to do it:

1. Open the Agent Builder from the AI Toolkit panel in Visual Studio Code.
2. Click the + New Agent button and provide a name for your agent.
3. Select a Model for your agent.
4. Within the System Prompt section, enter: You recommend a movie to watch.
5. Within the User Prompt section, enter: Recommend a science-fiction movie with robots.
6. In the Structured Output section, select json_schema as the output format.
7. Click Prepare schema.
8. In the wizard, select Use an example.
9. For the example, select paper_metadata.
10. Save the file to your desired location. You can name the file movie_metadata.
11. In the Agent Builder, select movie_metadata to open the file.
12. Using the template provided, modify the schema to format the title, genre, year, and reason. Once done, save the file.

```json
{
  "name": "movie_metadata",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" },
      "genre": { "type": "array", "items": { "type": "string" } },
      "year": { "type": "string" },
      "reason": { "type": "array", "items": { "type": "string" } }
    },
    "required": ["title", "genre", "year", "reason"],
    "additionalProperties": false
  }
}
```

And just like that, you've set up a JSON schema for your agent's output!

## Test Before You Build

You can submit a prompt with the Agent Builder to validate whether the agent adheres to the JSON schema when returning its response. When you click Run, the agent's response will appear in the Prompt tab, ideally in JSON format. (If you're calling a model from your own code rather than the Agent Builder, a hedged sketch of requesting schema-constrained output through an API appears at the end of this post.)

## Recap

Here's a quick rundown of what we covered:

- JSON lets you create reliable, machine-friendly agent responses.
- JSON is essential for interoperability between apps, tools, and workflows.
- Without JSON, you risk fragile pipelines, broken features, or confusing bugs.
- The AI Toolkit supports including a JSON schema with your agent's prompt.

## Want to Go Deeper?

Check out the AI Agents for Beginners curriculum, in which we dive a bit more into agentic design patterns, including defining structured outputs. We'll also have a video landing soon in the Build an Agent Series that takes you through a step-by-step look at creating a JSON schema for your agent!
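As promised, here is what the code path can look like outside the Agent Builder. This is a hedged sketch using the OpenAI Python SDK's JSON-schema response format together with the movie_metadata schema from above; the model name is a placeholder, and other providers expose equivalent options under different parameter names.

```python
# Hedged sketch: requesting schema-constrained output with the OpenAI Python SDK.
# The model name is a placeholder; other providers expose similar options.
import json
from openai import OpenAI

client = OpenAI()

movie_metadata_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "genre": {"type": "array", "items": {"type": "string"}},
        "year": {"type": "string"},
        "reason": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "genre", "year", "reason"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You recommend a movie to watch."},
        {"role": "user", "content": "Recommend a science-fiction movie with robots."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "movie_metadata",
            "strict": True,
            "schema": movie_metadata_schema,
        },
    },
)

movie = json.loads(response.choices[0].message.content)
print(movie["title"], movie["genre"], movie["year"])
```

Because the schema is enforced at the API level, the downstream code can parse the response with plain `json.loads` instead of brittle text cleanup.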
# What models can I use for free while prototyping?

Welcome back to Agent Support, a developer advice column for those head-scratching moments when you're building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents. Today's question comes from someone still in the prototyping phase, looking to use free models until they're ready to commit:

Dear Agent Support: I'm experimenting with different agent ideas, but I'm not ready to pay for API credits just yet. Are there any models I can use for free?

Short answer: yes, and you've got a couple of good options! Let's break them down.

## GitHub Models: A Free Way to Experiment

If you're just getting started with agents and want a no-cost way to try things out, GitHub-hosted models are a great option. These models are maintained by the GitHub team and run entirely on GitHub's infrastructure, so you don't need to bring your own API key or worry about usage fees. They're designed for prototyping and lightweight experimentation, making them ideal for testing out ideas, building proof-of-concepts, or just getting familiar with how agents and models interact. You can try them directly in the GitHub web interface or through tools like the AI Toolkit, which includes them in its Model Catalog. Many support common features like structured output, chat history, and tool use, and they're regularly updated to reflect community needs. Think of these as your training wheels: stable, reliable, and free to use while you explore what your agent can do.

## But Beware of Rate Limits

Free models are great for prototyping... but there's a catch! GitHub-hosted models come with usage limits, which means you might hit a wall if you're testing frequently, building complex agents, or collaborating with teammates. These rate limits exist to ensure fair access for everyone using the shared infrastructure, especially during peak demand. If you've ever wondered why your responses stop, it's probably because you've reached the cap for the day.

The good news? GitHub recently introduced a Pay-As-You-Go option. This lets you continue using the same hosted models with more generous limits, only paying for what you use. It's a helpful bridge for developers who've outgrown the free tier but aren't ready to commit to a full API plan with another provider. If your agent is starting to feel constrained, this might be the right moment to switch gears.

## Want More Control? Run a Local Model

If you'd rather skip rate limits altogether, or just prefer running things on your own machine, you could always use a local model. Local models give you full control over how the model runs, how often you use it, and what kind of hardware it runs on. There's no API key, no usage tracking, and no hidden costs. It's just you and the model, running side by side. You can download and host open-source models like Llama, Mistral, or Phi using tools like Ollama or Foundry Local, which make setup surprisingly simple (a quick sketch of calling a local Ollama model from Python follows below). Most local models are optimized to run efficiently on consumer-grade hardware, so even a decent laptop can handle basic inference. This is especially handy if you're experimenting with sensitive data, need offline access, or want to test agents in environments where cloud isn't an option. Of course, going local means you're responsible for setup, performance tuning, and hardware compatibility, but for many developers, that tradeoff is worth it!
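If you go the local route, here is a hedged sketch of chatting with an Ollama-hosted model from Python using the `ollama` package. The model name is just an example; use whichever model you've pulled locally.

```python
# Hedged sketch: chatting with a locally hosted Ollama model from Python.
# Install the client first (pip install ollama) and pull a model, e.g.:
#   ollama pull llama3.2
import ollama

response = ollama.chat(
    model="llama3.2",  # example model; use any model you've pulled locally
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Suggest three names for a trip-planning agent."},
    ],
)

print(response["message"]["content"])
```

No API key, no rate limits: the only constraints are your own hardware and which models you choose to pull.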
## Ready to Try One Out?

Whether you're curious about GitHub-hosted models or want to use a local one, the Models feature within the AI Toolkit makes it easy to test them out, no custom setup required. With just a few clicks, you can browse available models, run test prompts in the Playground, and even use them with the agent you're building. Here's how to do it.

Use a GitHub model:

1. Open the Model Catalog from the AI Toolkit panel in Visual Studio Code.
2. Click the Hosted by filter near the search bar.
3. Select GitHub.
4. Browse the filtered results.
5. Select + Add Model to add a model to your list of models.

Use a local model (note: only Ollama and custom ONNX models are currently supported; the instructions below cover adding local models from Ollama):

1. Download your chosen local model to your computer.
2. From the AI Toolkit panel in Visual Studio Code, hover over My Models and select the + icon.
3. In the wizard, select Add Ollama Model.
4. In the wizard, select Select models from Ollama library. This will provide a list of the models available in your local Ollama library (i.e., models you've downloaded).
5. In the wizard, select the model(s) you want to connect and click OK.

Your cost-free models are now available to use with your agent! If you're unsure whether a model is a GitHub model or an Ollama model, you can view its category within the My Models section of the AI Toolkit panel. The models within that section are organized by model source/host.

## Test Before You Build

Whether you've added a GitHub-hosted model or a local model, you can chat with the models in the Playground or within the Agent Builder. The model is available for selection within the Model drop-down. As a reminder, GitHub-hosted models have rate limits. If you hit a rate limit with a GitHub-hosted model, the AI Toolkit will show a notification offering the option to either use GitHub pay-as-you-go models or deploy to Azure AI Foundry for higher limits. Whichever path you choose, the AI Toolkit helps you prototype with confidence, giving you flexibility early on and clear upgrade paths when you're ready to scale!

## Recap

Here's a quick rundown of what we covered:

- GitHub-hosted models let you start building fast, with no API keys or fees, but they do come with rate limits.
- GitHub Pay-As-You-Go gives you a way to scale up without switching tools.
- Local models give you full control and zero rate limits; just run them on your own machine using tools like Ollama.
- The AI Toolkit supports both options, letting you chat with models, test prompts, and build agents right inside VS Code.

## Want to Go Deeper?

With so many models available these days, it can feel overwhelming to keep tabs on what's out there. Check out the Model Mondays series for all the latest news on language models! By the way, GitHub has guides on discovering and experimenting with free AI models; they're definitely worth a read if you want to understand what's under the hood. Check out their articles on GitHub Models.

No matter where you are in your agent journey, having free, flexible model options means you can spend more time building, and less time worrying about the bill.