AI Foundry
I want to show my agent a picture—Can I?
Welcome to Agent Support—a developer advice column for those head-scratching moments when you’re building an AI agent! Each post answers a question inspired by real conversations in the AI developer community, offering practical advice and tips. To kick things off, we’re tackling a common challenge for anyone experimenting with multimodal agents: working with image input. Let’s dive in!

Dear Agent Support,
I’m building an AI agent, and I’d like to include screenshots or product photos as part of the input. But I’m not sure if that’s even possible, or if I need to use a different kind of model altogether. Can I actually upload an image and have the agent process it?

Great question, and one that trips up a lot of people early on! The short answer is: yes, some models can process images—but not all of them. Let’s break that down a bit.

🧠 Understanding Image Input
When we talk about image input or image attachments, we’re talking about the ability to send a non-text file (like a .png, .jpg, or screenshot) into your prompt and have the model analyze or interpret it. That could mean describing what’s in the image, extracting text from it, answering questions about a chart, or giving feedback on a design layout.

🚫 Not All Models Support Image Input
That said, this isn’t something every model can do. Most base language models are trained on text data only; they’re not designed to interpret non-text inputs like images. In most tools and interfaces, the option to upload an image only appears if the selected model supports it, since platforms typically hide or disable features that aren't compatible with a model's capabilities. So, if your current chat interface doesn’t mention anything about vision or image input, it’s likely because the model itself isn’t equipped to handle it.

That’s where multimodal models come in. These are models that have been trained (or extended) to understand both text and images, and sometimes other data types too. Think of them as being fluent in more than one language, except in this case, one of those “languages” is visual.

🔎 How to Find Image-Supporting Models
If you’re trying to figure out which models support images, the AI Toolkit is a great place to start! The extension includes a built-in Model Catalog where you can filter models by Feature—like Image Attachment—so you can skip the guesswork. Here’s how to do it:

1. Open the Model Catalog from the AI Toolkit panel in Visual Studio Code.
2. Click the Feature filter near the search bar.
3. Select Image Attachment.
4. Browse the filtered results to see which models can accept visual input.

Once you've got your filtered list, you can check out the model details or try one in the Playground to test how it handles image-based prompts.

🧪 Test Before You Build
Before you plug a model into your agent and start wiring things together, it’s a good idea to test how the model handles image input on its own. This gives you a quick feel for the model’s behavior and helps you catch any limitations before you're deep into building. You can do this in the Playground, where you can upload an image and pair it with a simple prompt like:

“Describe the contents of this image.”
OR
“Summarize what’s happening in this screenshot.”

If the model supports image input, you’ll be able to attach a file and get a response based on its visual analysis. If you don’t see the option to upload an image, double-check that the model you’ve selected has image capabilities—this is usually a model issue, not a UI bug.
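If the Playground test looks good and you want to send images from your own code, the same idea carries over to the API. Below is a minimal sketch, not from this column itself, using the OpenAI Node SDK's chat completions format, which many OpenAI-compatible endpoints (including Azure AI Foundry model deployments) accept. The model name, file path, and the OPENAI_API_KEY environment variable it relies on are placeholder assumptions; swap in whatever your deployment uses.

```typescript
import OpenAI from "openai";
import { readFileSync } from "node:fs";

// Assumes OPENAI_API_KEY is set; pass a `baseURL` option if you're
// targeting your own OpenAI-compatible endpoint instead of api.openai.com.
const client = new OpenAI();

// Images travel as a data URL (or a public https URL) inside the message.
const imageBase64 = readFileSync("screenshot.png").toString("base64");

const response = await client.chat.completions.create({
  model: "gpt-4o", // placeholder: must be a vision-capable (multimodal) model
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe the contents of this image." },
        {
          type: "image_url",
          image_url: { url: `data:image/png;base64,${imageBase64}` },
        },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```

If the model you picked isn't multimodal, the request will typically fail with an error rather than silently ignoring the image, which is the API-level equivalent of the missing upload button.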
🔁 Recap
Here’s a quick rundown of what we covered:

- Not all models support image input—you’ll need a multimodal model specifically built to handle visual data.
- Most platforms won’t let you upload an image unless the model supports it, so if you don’t see that option, it’s probably a model limitation.
- You can use the AI Toolkit’s Model Catalog to filter models by capability—just check the box for Image Attachment.
- Test the model in the Playground before integrating it into your agent to make sure it behaves the way you expect.

📺 Want to Go Deeper?
Check out my latest video on how to choose the right model for your agent—it’s part of the Build an Agent Series, where I walk through the building blocks of turning an idea into a working AI agent.

And if you’re looking to sharpen your model instincts, don’t miss Model Mondays—a weekly series that helps developers like you build your Model IQ, one spotlight at a time. Whether you’re just starting out or already building AI-powered apps, it’s a great way to stay current and confident in your model choices.

👉 Explore the series and catch the next episode: aka.ms/model-mondays/rsvp

If you're just getting started with building agents, check out our Agents for Beginners curriculum. And for all your general AI and AI agent questions, join us in the Azure AI Foundry Discord! You can find me hanging out there answering your questions about the AI Toolkit. I'm looking forward to chatting with you there!

Whatever you're building, the right model is out there—and with the right tools, you'll know exactly how to find it.
Let’s Learn About MCP: An Introduction to the Model Context Protocol (MCP)

Don’t miss the upcoming “Let’s Learn – MCP” event on Microsoft Reactor on July 24, designed for anyone who wants to get to know the new standard for intelligent agents (the Model Context Protocol) and learn how to put it into practice. The session is in Italian and the demos are in Python, but it’s part of a series of live streams available in many languages.
AI Repo of the Week: Generative AI for Beginners with JavaScript

Introduction
Ready to explore the fascinating world of Generative AI using your JavaScript skills? This week’s featured repository, Generative AI for Beginners with JavaScript, is your launchpad into the future of application development. Whether you're just starting out or looking to expand your AI toolbox, this open-source GitHub resource offers a rich, hands-on journey. It includes interactive lessons, quizzes, and even time-travel storytelling featuring historical legends like Leonardo da Vinci and Ada Lovelace. Each chapter combines narrative-driven learning with practical exercises, helping you understand foundational AI concepts and apply them directly in code. It’s immersive, educational, and genuinely fun.

What You'll Learn

1. 🧠 Foundations of Generative AI and LLMs — Start with the basics: What is generative AI? How do large language models (LLMs) work? This chapter lays the groundwork for how these technologies are transforming JavaScript development.
2. 🚀 Build Your First AI-Powered App — Walk through setting up your environment and creating your first AI app. Learn how to configure prompts and unlock the potential of AI in your own projects.
3. 🎯 Prompt Engineering Essentials — Get hands-on with prompt engineering techniques that shape how AI models respond. Explore strategies for crafting prompts that are clear, targeted, and effective.
4. 📦 Structured Output with JSON — Learn how to guide the model to return structured data formats like JSON—critical for integrating AI into real-world applications (see the sketch at the end of this post).
5. 🔍 Retrieval-Augmented Generation (RAG) — Go beyond static prompts by combining LLMs with external data sources. Discover how RAG lets your app pull in live, contextual information for more intelligent results.
6. 🛠️ Function Calling and Tool Use — Give your LLM new powers! Learn how to connect your own functions and tools to your app, enabling more dynamic and actionable AI interactions.
7. 📚 Model Context Protocol (MCP) — Dive into MCP, a new standard for organizing prompts, tools, and resources. Learn how it simplifies AI app development and fosters consistency across projects.
8. ⚙️ Enhancing MCP Clients with LLMs — Build on what you’ve learned by integrating LLMs directly into your MCP clients. See how to make them smarter, faster, and more helpful.

✨ More chapters coming soon—watch the repo for updates!

Companion App: Interact with History
Experience the power of generative AI in action through the companion web app—where you can chat with historical figures and witness how JavaScript brings AI to life in real time.

Conclusion
Generative AI for Beginners with JavaScript is more than a course—it’s an adventure into how storytelling, coding, and AI can come together to create something fun and educational. Whether you're here to upskill, experiment, or build the next big thing, this repository is your all-in-one resource to get started with confidence.

🔗 Jump into the future of development—check out the repo and start building with AI today!
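To give a flavor of the structured-output chapter, here is a sketch of my own (not code from the course) showing JSON mode with the OpenAI Node SDK; the model name and the product-extraction prompt are invented for illustration:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set

const response = await client.chat.completions.create({
  model: "gpt-4o-mini", // placeholder: any model that supports JSON mode
  // JSON mode constrains the reply to syntactically valid JSON.
  response_format: { type: "json_object" },
  messages: [
    {
      role: "system",
      content:
        "Extract the product name, price, and currency from the user's text. " +
        'Reply with JSON shaped like {"name": string, "price": number, "currency": string}.',
    },
    { role: "user", content: "The Contoso kettle is on sale for $29.99." },
  ],
});

// Safe to parse: JSON mode guarantees well-formed JSON (though not a schema).
const product = JSON.parse(response.choices[0].message.content ?? "{}");
console.log(product.name, product.price, product.currency);
```

The core idea the chapter teaches is the same regardless of SDK: tell the model the shape you want, then parse the result like any other data.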
GPT-5 Family of Models & GPT OSS Are Now Available in AI Toolkit for VS Code

AI Toolkit v0.18.3 is here—and it’s a major milestone. This release introduces full support for:

- The latest GPT-5 family of models
- OpenAI’s open-source models (GPT OSS) via Azure AI Foundry and ONNX Runtime

Whether you're building in the cloud, on the edge, or experimenting locally, this update gives you more flexibility, performance, and choice than ever.
How do I give my agent access to tools?

Welcome back to Agent Support—a developer advice column for those head-scratching moments when you’re building an AI agent! Each post answers a real question from the community with simple, practical guidance to help you build smarter agents. Today’s question comes from someone trying to move beyond chat-only agents into more capable, action-driven ones:

💬 Dear Agent Support
I want my agent to do more than just respond with text. Ideally, it could look up information, call APIs, or even run code—but I’m not sure where to start. How do I give my agent access to tools?

This is exactly where agents start to get interesting! Giving your agent tools is one of the most powerful ways to expand what it can do. But before we get into the “how,” let’s talk about what tools actually mean in this context—and how Model Context Protocol (MCP) helps you use them in a standardized, agent-friendly way.

🛠️ What Do We Mean by “Tools”?
In agent terms, a tool is any external function or capability your agent can use to complete a task. That might be:

- A web search function
- A weather lookup API
- A calculator
- A database query
- A custom Python script

When you give an agent tools, you’re giving it a way to take action—not just generate text. Think of tools as buttons the agent can press to interact with the outside world.

⚡ Why Give Your Agent Tools?
Without tools, an agent is limited to what it “knows” from its training data and prompt. It can guess, summarize, and predict, but it can’t do. Tools change that! With the right tools, your agent can:

- Pull live data from external systems
- Perform dynamic calculations
- Trigger workflows in real time
- Make decisions based on changing conditions

It’s the difference between an assistant that can answer trivia questions vs. one that can book your travel or manage your calendar.

🧩 So… How Does This Work? Enter Model Context Protocol (MCP).
MCP is a simple, open protocol that helps agents use tools in a consistent way—whether those tools live in your app, your cloud project, or on a server you built yourself. Here’s what MCP does:

- Describes tools in a standardized format that models can understand
- Wraps the function call + input + output into a predictable schema
- Lets agents request tools as needed (with reasoning behind their choices)

This makes it much easier to plug tools into your agent workflow without reinventing the wheel every time! (You’ll find a sketch of what a tool definition looks like after the steps below.)

🔌 How to Connect an Agent to Tools
Wiring tools into your agent might sound complex, but it doesn’t have to be! If you’ve already got an MCP server in mind, there’s a straightforward way within the AI Toolkit to expose it as a tool your agent can use. Here’s how to do it:

1. Open the Agent Builder from the AI Toolkit panel in Visual Studio Code.
2. Click the + New Agent button and provide a name for your agent.
3. Select a Model for your agent.
4. Within the Tools section, click + MCP Server.
5. In the wizard that appears, click + Add Server.
6. From there, you can select one of the MCP servers built by Microsoft, connect to an existing server that’s running, or even create your own using a template!
7. After giving the server a Server ID, you’ll be given the option to select which tools from the server to add for your agent.

Once connected, your agent can call tools dynamically based on the task at hand.
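To make “tools” concrete before you wire anything up, here is roughly what a single tool looks like on the server side, sketched with the official MCP TypeScript SDK. The server name, tool name, and canned weather reply are invented for illustration, so treat this as a sketch rather than a finished server:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one tool the agent can discover and call.
const server = new McpServer({ name: "demo-tools", version: "1.0.0" });

server.tool(
  "get_weather",                          // the name the model sees
  "Get the current weather for a city.",  // the description it reasons over
  { city: z.string().describe("City name, e.g. 'Seattle'") },
  async ({ city }) => {
    // A real server would call a weather API here; this reply is canned.
    return { content: [{ type: "text", text: `It's sunny in ${city}.` }] };
  }
);

// The stdio transport lets clients like the AI Toolkit launch this locally.
await server.connect(new StdioServerTransport());
```

The standardized name, description, and input schema are what let the Agent Builder list a server’s tools and let the model decide when to call them.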
🧪 Test Before You Build
Once you’ve connected your agent to an MCP server and added tools, don’t jump straight into full integration. It’s worth taking time to test whether the agent is calling the right tool for the job.

You can do this directly in the Agent Builder: enter a test prompt that should trigger a tool in the User Prompt field, click Run, and observe how the model responds. This gives you a quick read on tool call accuracy. If the agent selects the wrong tool, it’s a sign that your system prompt might need tweaking before you move forward.

However, if the agent calls the correct tool but the output still doesn’t look right, take a step back and check both sides of the interaction. It might be that the system prompt isn’t clearly guiding the agent on how to use or interpret the tool’s response. But it could also be an issue with the tool itself—whether that’s a bug in the logic, unexpected behavior, or a mismatch between input and expected output. Testing both the tool and the prompt in isolation can help you pinpoint where things are going wrong before you move on to full integration.

🔁 Recap
Here’s a quick rundown of what we covered:

- Tools = external functions your agent can use to take action
- MCP = a protocol that helps your agent discover and use those tools reliably
- If the agent calls the wrong tool—or uses the right tool incorrectly—check your system prompt and test the tool logic separately to isolate the issue.

📺 Want to Go Deeper?
Check out my latest video on how to connect your agent to an MCP server—it’s part of the Build an Agent Series, where I walk through the building blocks of turning an idea into a working AI agent.

The MCP for Beginners curriculum covers all the essentials—MCP architecture, creating and debugging servers, and best practices for developing, testing, and deploying MCP servers and features in production environments. It also includes several hands-on exercises across .NET, Java, TypeScript, JavaScript, and Python.

👉 Explore the full curriculum: aka.ms/AITKmcp

And for all your general AI and AI agent questions, join us in the Azure AI Foundry Discord! You can find me hanging out there answering your questions about the AI Toolkit. I'm looking forward to chatting with you there!

Whether you’re building a productivity agent, a data assistant, or a game bot—tools are how you turn your agent from smart to useful.