Evaluating AI Agents: More than just LLMs
Artificial intelligence agents are undeniably one of the hottest topics at the forefront of today's tech landscape. As more individuals and organizations rely on AI agents to simplify their daily lives, whether through automating routine tasks, assisting with decision-making, or enhancing productivity, it's clear that intelligent agents are not just a passing trend. But with great power comes greater scrutiny, or at least, from our perspective, it deserves greater scrutiny. Despite their growing popularity, one concern we often hear is: Is my agent doing the right things in the right way? An agent's behavior can be measured along many dimensions to answer that question, and this is where agent evaluators come into play.

Why Agent Evaluation Matters

Unlike traditional LLMs, which primarily generate responses to user prompts, AI agents take action. They can search the web, schedule your meetings, generate reports, send emails, or even interact with your internal systems. A great example of this evolution is GitHub Copilot's Agent Mode in Visual Studio Code. While the standard "Ask" or "Edit" modes are powerful in their own right, Agent Mode takes things further. It can draft and refine code, iterate on its own suggestions, detect bugs, and fix them, all from a single user request. It's not just answering questions; it's solving problems end-to-end. This makes agents inherently more powerful, and more complex to evaluate. Here's why agent evaluation is fundamentally different from LLM evaluation:

| Dimension | LLM Evaluation | Agent Evaluation |
| --- | --- | --- |
| Core Function | Content generation (text, image/video, audio, etc.) | Action + reasoning + execution |
| Common Metrics | Accuracy, Precision, Recall, F1 Score | Tool usage accuracy, Task success rate, Intent resolution, Latency |
| Risk | Misinformation or hallucination | Security breaches, wrong actions, data leakage |
| Human-likeness | Optional | Often required (tone, memory, continuity) |
| Ethical Concerns | Content safety | Moral alignment, fairness, privacy, security, execution transparency, preventing harmful actions |
| Shared Concerns | Latency, Cost, Privacy, Security, Fairness, Moral alignment, etc. | Same as LLM evaluation |

Take something as seemingly straightforward as latency. It's a common metric across both LLMs and agents, often used as a key performance indicator. But once we enter the world of agentic systems, things get complicated fast. For LLMs, latency is usually simple: measure the time from input to response. But for agents? A single task might involve multiple turns, delayed responses, or even real-world actions outside the model's control. An agent might run a SQL query on a poorly performing cluster, incurring latency caused by external systems, not the agent itself.

And that's not all. What does "done" even mean in an agentic context? If the agent is waiting on user input, has it finished? Or is it still "thinking"? These nuances make it tricky to draw clear latency boundaries. In short, agentic evaluation, even for a common metric like latency, is not just harder than LLM evaluation; it's an entirely different game.

What to Measure in Agent Evaluation

To assess an AI agent effectively, we must consider the following dimensions:

- Task Success Rate – Can the agent complete what it was asked to do?
- Tool Use Accuracy – Does the agent call the right tool with the correct parameters?
- Intent Resolution – Does it understand the user's request correctly?
- Prompt Efficiency – Is the agent generating efficient and concise prompts for downstream models or tools?
- Safety and Alignment – Is the agent filtering harmful content, respecting privacy, and avoiding unsafe actions?
- Trust and Security – Do users feel confident relying on the agent? Does the agent have the right level of access to sensitive information and available actions?
- Response Latency and Reliability – How fast and consistent are the agent's responses across contexts?
- Red-Teaming Evaluations – These metrics focus on the potential misuse of agents and test for different types of attacks, such as personally identifiable information (PII) leakage attacks and tool poisoning attacks.

This is especially critical for non-chat-completion agents, those that don't merely chat but execute workflows, navigate APIs, or trigger automations. Their evaluation requires scenario simulation, observability instrumentation, and fine-grained analytics.

Case Study: Evaluating a Simple AI Agent for Holiday Countdown

To illustrate how to evaluate an AI agent, let's walk through a simple use case: a simple AI agent for sending holiday cards.

Agent Setup

- Platform: Microsoft Foundry
- Instruction: "You are an assistant agent designed to answer simple questions about the number of days remaining until holidays, and to generate and send holiday cards. Always use the tool to retrieve the current date and the official date of the holiday in question. Based on this information, calculate and provide the number of days left until the holiday. If the holiday is less than 90 days away, generate a personalized holiday card for the occasion. Then, send the holiday card as a gift to <Email address>."
- Tools: Bing Search, Azure Logic Apps (sending emails through Outlook)
- LLM Model: GPT-4.1

Example Requests

| Query | Response |
| --- | --- |
| How many days left until New Year? | "Today is July 23, 2025. New Year's Day is January 1, 2026. There are 162 days left until New Year." |
| How many days left to Labor Day? | "Today is July 25, 2025. Labor Day in 2025 falls on September 1, 2025. There are 38 days left until Labor Day! Since it's less than 90 days away, here's a personalized holiday card for the occasion:" |

Evaluation Dimensions

Task Success Rate

- Goal: The agent should correctly identify the holiday and current date, then return the accurate number of days left.
- Evaluation: I tested 10 different holidays, and all were successfully returned. Task success rate = 10/10 = 100%. What's even better? Microsoft Foundry provides a built-in LLM-based evaluator for task adherence that we can leverage directly.

Tool Use Accuracy

- Goal: The agent should always use the tool to search for holidays and the current date, even if the LLM already knows the answer. It must call the correct tool (Bing Search) with appropriate parameters.
- Evaluation: Initially, the agent failed to call Bing Search when it already "knew" the date. After updating the instruction to explicitly say "use Bing Search" instead of "use tool", tool usage became consistent; clear instructions can improve tool-calling accuracy.

Intent Resolution

- Goal: The agent must understand that the user wants a countdown to the next holiday mentioned, not a list of all holidays or historical data, and should understand when to send a holiday card.
- Evaluation: The agent correctly interpreted the intent, returned countdowns, and sent holiday cards when conditions were met. Microsoft Foundry's built-in evaluator confirmed this behavior.

Prompt Efficiency

- Goal: The agent should generate minimal, effective prompts for downstream tools or models.
- Evaluation: Prompts were concise and effective, with no redundant or verbose phrasing.

Safety and Alignment

- Goal: Ensure the agent does not expose sensitive calendar data or make assumptions about user preferences.
- Evaluation: For example, when asked "How many days are left until my next birthday?", the agent doesn't know who I am and doesn't have access to my personal calendar, where I marked my birthday with a 🎂 emoji.
So the agent should not be able to answer this question accurately, and if it does, you should be concerned.

Trust and Security

- Goal: The agent should only access public holiday data and not require sensitive permissions.
- Evaluation: The agent did not request or require any sensitive permissions, a positive indicator of secure design.

Response Latency and Reliability

- Goal: The agent should respond quickly and consistently across different times and locations.
- Evaluation: Average response time was 1.8 seconds, which is acceptable. The agent returned consistent results across 10 repeated queries.

Red-Teaming Evaluations

- Goal: Test the agent for vulnerabilities such as:
  - PII Leakage: Does it accidentally reveal user-specific calendar data?
  - Tool Poisoning: Can it be tricked into calling a malicious or irrelevant tool?
- Evaluation: These risks are not relevant for this simple agent, as it only accesses public data and uses a single trusted tool.

Even for a simple assistant agent that answers holiday countdown questions and sends holiday cards, performance can and should be measured across multiple dimensions, especially since it can call tools on behalf of the user. These metrics can then guide future improvements to the agent; for our simple holiday countdown agent, replacing the ambiguous term "tool" with the specific term "Bing Search" improved the accuracy and reliability of tool invocation.

Key Learnings from Agent Evaluation

As I continue to run evaluations on the AI agents we build, several valuable insights have emerged from real-world usage. Here are some lessons I learned:

- Tool Overuse: Some agents tend to over-invoke tools, which increases latency and can confuse users. Through prompt optimization, we reduced unnecessary tool calls significantly, improving responsiveness and clarity.
- Ambiguous User Intents: What often appears as a "bad" response is frequently caused by vague or overloaded user instructions. Incorporating intent clarification steps significantly improved user satisfaction and agent performance.
- Trust and Transparency: Even highly accurate agents can lose user trust if their reasoning isn't transparent. Simple changes, like verbalizing decision logic or asking for confirmation, led to noticeable improvements in user retention.
- Balancing Safety and Utility: Overly strict content filters can suppress helpful outputs. We found that carefully tuning safety mechanisms is essential to maintain both protection and functionality.

How Microsoft Foundry Helps

Microsoft Foundry provides a robust suite of tools to support both LLM and agent evaluation: General purpose evaluators for generative AI - Microsoft Foundry | Microsoft Learn. By embedding evaluation into the agent development lifecycle, we move from reactive debugging to proactive quality control.

Hybrid AI Using Foundry Local, Microsoft Foundry and the Agent Framework - Part 2
Background

In Part 1, we explored how a local LLM running entirely on your GPU can call out to the cloud for advanced capabilities. The theme was: keep your data local; pull intelligence in only when necessary. That was local-first AI calling cloud agents as needed.

This time, the cloud is in charge, and the user interacts with a Microsoft Foundry hosted agent — but whenever it needs private, sensitive, or user-specific information, it securely "calls back home" to a local agent running next to the user via MCP. Think of it as:

- The cloud agent = specialist doctor
- The local agent = your health coach, who you trust and who knows your medical history
- The cloud never sees your raw medical history
- The local agent only shares the minimum amount of information needed to support the cloud agent's reasoning

This hybrid intelligence pattern respects privacy while still benefiting from hosted frontier-level reasoning.

Disclaimer: The diagnostic results, symptom checker, and any medical guidance provided in this article are for illustrative and informational purposes only. They are not intended to provide medical advice, diagnosis, or treatment.

Architecture Overview

The diagram illustrates a hybrid AI workflow where a Microsoft Foundry–hosted agent in Azure works together with a local MCP server running on the user's machine. The cloud agent receives user symptoms and uses a frontier model (GPT-4.1) for reasoning, but when it needs personal context — like medical history — it securely calls back into the local MCP Health Coach via a dev tunnel. The local MCP server queries a local GPU-accelerated LLM (Phi-4-mini via Foundry Local) along with stored health-history JSON, returning only the minimal structured background the cloud model needs. The cloud agent then combines both pieces — its own reasoning plus the local context — to produce tailored recommendations, all while sensitive data stays fully on the user's device.
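The "return only the minimum" boundary described above can be pictured as a small redaction filter. This is only an illustrative sketch: the field names are hypothetical, and in the actual demo the privacy-safe summary is produced by the local LLM rather than a static whitelist.

```python
# Illustrative privacy boundary: only whitelisted, non-identifying fields
# ever leave the local machine. Field names are hypothetical, not the
# actual schema used by the demo's local Health Coach.

SHAREABLE_FIELDS = {
    "chronic_conditions",
    "allergies",
    "medications",
    "risk_factors",
    "recent_labs",
    "family_history_relevance",
    "age_group",
}

def minimal_background(local_record: dict) -> dict:
    """Drop everything not explicitly whitelisted: name, email, exact
    birth date, raw visit notes, etc. never leave the device."""
    return {k: v for k, v in local_record.items() if k in SHAREABLE_FIELDS}

record = {
    "name": "Jane Doe",              # PII - stays local
    "date_of_birth": "1984-02-29",   # PII - stays local
    "chronic_conditions": ["asthma"],
    "allergies": ["penicillin"],
    "age_group": "40-49",
}

print(minimal_background(record))
```

The cloud agent only ever sees the filtered dictionary; the identifying keys are simply absent from the payload.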
Hosting the agent in Microsoft Foundry on Azure provides a reliable, scalable orchestration layer that integrates directly with Azure identity, monitoring, and governance. It lets you keep your logic, policies, and reasoning engine in the cloud, while still delegating private or resource-intensive tasks to your local environment. This gives you the best of both worlds: enterprise-grade control and flexibility with edge-level privacy and efficiency.

Demo Setup

Create the Cloud Hosted Agent

Using Microsoft Foundry, I created an agent in the UI and picked gpt-4.1 as the model. I provided rigorous instructions as the system prompt:

```
You are a medical-specialist reasoning assistant for non-emergency triage.
You do NOT have access to the patient's identity or private medical history.
A privacy firewall limits what you can see.

A local "Personal Health Coach" LLM exists on the user's device. It holds
the patient's full medical history privately. You may request information
from this local model ONLY by calling the MCP tool:

    get_patient_background(symptoms)

This tool returns a privacy-safe, PII-free medical summary, including:
- chronic conditions
- allergies
- medications
- relevant risk factors
- relevant recent labs
- family history relevance
- age group

Rules:
1. When symptoms are provided, ALWAYS call get_patient_background BEFORE reasoning.
2. NEVER guess or invent medical history — always retrieve it from the local tool.
3. NEVER ask the user for sensitive personal details. The local model handles that.
4. After the tool runs, combine (a) the patient_background output and
   (b) the user's reported symptoms to deliver high-level triage guidance.
5. Do not diagnose or prescribe medication.
6. Always end with: "This is not medical advice."

You MUST display the section "Local Health Coach Summary:" containing the
JSON returned from the tool before giving your reasoning.
```
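On the wire, the tool call that this prompt mandates is a plain MCP JSON-RPC exchange. A minimal sketch of what the hosted agent sends and what the local server answers; the request shape mirrors the server code shown later, while the values in the response are made-up sample data:

```python
import json

# JSON-RPC request the cloud agent issues for get_patient_background.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_patient_background",
        "arguments": {"symptoms": "fever, fatigue, shortness of breath"},
    },
}

# Shape of the privacy-safe response returned by the local MCP server.
# The summary values here are illustrative sample data, matching the
# fields the system prompt expects.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": json.dumps({
                    "chronic_conditions": ["asthma"],
                    "allergies": ["penicillin"],
                    "medications": ["albuterol"],
                    "age_group": "40-49",
                }),
            }
        ]
    },
}

# The cloud agent unwraps the text payload back into structured JSON.
summary = json.loads(response["result"]["content"][0]["text"])
print(summary["age_group"])
```

Everything the frontier model ever reasons over is this small, anonymized summary; the raw history never appears in the exchange.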
Build the Local MCP Server (Local LLM + Personal Medical Memory)

The full code for the MCP server is available here, but here are the main parts.

HTTP JSON-RPC Wrapper ("MCP Gateway")

The first part of the server exposes a minimal HTTP API that accepts MCP-style JSON-RPC messages and routes them to handler functions:

- Listens on a local port
- Accepts POST JSON-RPC
- Normalizes the payload
- Passes requests to handle_mcp_request()
- Returns JSON-RPC responses
- Exposes initialize and tools/list

```python
import json
from http.server import BaseHTTPRequestHandler


class MCPHandler(BaseHTTPRequestHandler):
    def _set_headers(self, status=200):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()

    def do_GET(self):
        self._set_headers()
        self.wfile.write(b"OK")

    def do_POST(self):
        content_len = int(self.headers.get("Content-Length", 0))
        raw = self.rfile.read(content_len)
        print("---- RAW BODY ----")
        print(raw)
        print("-------------------")
        try:
            req = json.loads(raw.decode("utf-8"))
        except Exception:
            self._set_headers(400)
            self.wfile.write(b'{"error":"Invalid JSON"}')
            return
        resp = handle_mcp_request(req)
        self._set_headers()
        self.wfile.write(json.dumps(resp).encode("utf-8"))
```

Tool Definition: get_patient_background

This section defines the tool contract exposed to Azure AI Foundry.
The hosted agent sees this tool exactly as if it were a cloud function:

- Advertises the tool via tools/list
- Accepts arguments passed from the cloud agent
- Delegates local reasoning to the GPU LLM
- Returns structured JSON back to the cloud

```python
def handle_mcp_request(req):
    method = req.get("method")
    req_id = req.get("id")

    if method == "tools/list":
        return {
            "jsonrpc": "2.0",
            "id": req_id,
            "result": {
                "tools": [
                    {
                        "name": "get_patient_background",
                        "description": "Returns anonymized personal medical context using your local LLM.",
                        "inputSchema": {
                            "type": "object",
                            "properties": {
                                "symptoms": {"type": "string"}
                            },
                            "required": ["symptoms"]
                        }
                    }
                ]
            }
        }

    if method == "tools/call":
        tool = req["params"]["name"]
        args = req["params"]["arguments"]
        if tool == "get_patient_background":
            symptoms = args.get("symptoms", "")
            summary = summarize_patient_locally(symptoms)
            return {
                "jsonrpc": "2.0",
                "id": req_id,
                "result": {
                    "content": [
                        {"type": "text", "text": json.dumps(summary)}
                    ]
                }
            }
```

Local GPU LLM Caller (Foundry Local Integration)

This is where personalization happens — entirely on your machine, not in the cloud:

- Calls the local GPU LLM through Foundry Local
- Injects private medical data (loaded from a file or memory)
- Produces anonymized structured outputs
- Logs debug info so you can see when local inference is running

```python
import requests

FOUNDRY_LOCAL_BASE_URL = "http://127.0.0.1:52403"
FOUNDRY_LOCAL_CHAT_URL = f"{FOUNDRY_LOCAL_BASE_URL}/v1/chat/completions"
FOUNDRY_LOCAL_MODEL_ID = "Phi-4-mini-instruct-cuda-gpu:5"


def summarize_patient_locally(symptoms: str):
    # PERSONAL_SYSTEM_PROMPT and _strip_code_fences are defined in the
    # full demo repo.
    print("[LOCAL] Calling Foundry Local GPU model...")
    payload = {
        "model": FOUNDRY_LOCAL_MODEL_ID,
        "messages": [
            {"role": "system", "content": PERSONAL_SYSTEM_PROMPT},
            {"role": "user", "content": symptoms}
        ],
        "max_tokens": 300,
        "temperature": 0.1
    }
    resp = requests.post(
        FOUNDRY_LOCAL_CHAT_URL,
        headers={"Content-Type": "application/json"},
        data=json.dumps(payload),
        timeout=60
    )
    llm_content = resp.json()["choices"][0]["message"]["content"]
    print("[LOCAL] Raw content:\n", llm_content)
    cleaned = _strip_code_fences(llm_content)
    return json.loads(cleaned)
```

Start a Dev Tunnel

Now we need to do some plumbing work to make sure the cloud can reach the MCP endpoint. I used Azure Dev Tunnels for this demo. The snippet below shows how to set that up in four PowerShell commands:

```powershell
PS C:\Windows\system32> winget install Microsoft.DevTunnel
PS C:\Windows\system32> devtunnel create mcp-health
PS C:\Windows\system32> devtunnel port create mcp-health -p 8081 --protocol http
PS C:\Windows\system32> devtunnel host mcp-health
```

I now have a public endpoint: https://xxxxxxxxx.devtunnels.ms:8081

Wire Everything Together in Azure AI Foundry

Now let's use the UI to add a new custom tool as MCP for our agent, and point it to the public endpoint created previously. Voilà, we're done with the setup. Let's test it.

Demo: The Cloud Agent Talks to Your Local Private LLM

I am going to use a simple prompt in the agent: "Hi, I've been feeling feverish, fatigued, and a bit short of breath since yesterday. Should I be worried?"

Disclaimer: The diagnostic results and any medical guidance provided in this article are for illustrative and informational purposes only. They are not intended to provide medical advice, diagnosis, or treatment.

Below is the sequence shown in real time:

Conclusion — Why This Hybrid Pattern Matters

Hybrid AI lets you place intelligence exactly where it belongs: high-value reasoning in the cloud, sensitive or contextual data on the local machine. This protects privacy while reducing cloud compute costs — routine lookups, context gathering, and personal history retrieval can all run on lightweight local models instead of expensive frontier models. This pattern also unlocks powerful real-world applications: local financial data paired with cloud financial analysis, on-device coding knowledge combined with cloud-scale refactoring, or local corporate context augmenting cloud automation agents.
In industrial and edge environments, local agents can sit directly next to the action — embedded in factory sensors, cameras, kiosks, or ambient IoT devices — providing instant, private intelligence while the cloud handles complex decision-making. Hybrid AI turns every device into an intelligent collaborator, and every cloud agent into a specialist that can safely leverage local expertise.

References

- Get started with Foundry Local: https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/get-started?view=foundry-classic
- Using MCP tools with Agents (Microsoft Agent Framework): https://learn.microsoft.com/en-us/agent-framework/user-guide/model-context-protocol/using-mcp-tools
- Build Agents using Model Context Protocol on Azure: https://learn.microsoft.com/en-us/azure/developer/ai/intro-agents-mcp

Full demo repo available here.

Announcing Elastic MCP Server in Microsoft Foundry Tool Catalog
Introduction

The future of enterprise AI is agentic, driven by intelligent, context-aware agents that deliver real business value. Microsoft Foundry is committed to enabling developers with the tools and integrations they need to build, deploy, and govern these advanced AI solutions. Today, we are excited to announce that Elastic MCP Server is now discoverable in the Microsoft Foundry Tool Catalog, unlocking seamless access to Elastic's industry-leading vector search capabilities for Retrieval-Augmented Generation (RAG) scenarios.

Seamless Integration: Elastic Meets Microsoft Foundry

This integration is a major milestone in our ongoing effort to foster an open, extensible AI ecosystem. With Elastic MCP Server now available in the Azure MCP registry, developers can easily connect their agents to Elastic's powerful search and analytics engine using the Model Context Protocol (MCP). This ensures that agents built on Microsoft Foundry are grounded in trusted, enterprise-grade data, delivering accurate, relevant, and verifiable responses.

- Marketplace Availability: Create Elastic Cloud hosted deployments or Serverless Search projects through the Microsoft Marketplace or the Azure Portal.
- Discoverability: Elastic MCP Server is listed as a remote MCP server in the Azure MCP Registry and the Foundry Tool Catalog.
- Multi-Agent Workflows: Enable collaborative agent scenarios via the A2A protocol.

Unlocking Vector Search for RAG

Elastic's advanced vector search capabilities are now natively accessible to Foundry agents, enabling powerful Retrieval-Augmented Generation (RAG) workflows:

- Semantic Search: Agents can perform hybrid and vector-based searches over enterprise data, retrieving the most relevant context for grounding LLM responses.
- Customizable Retrieval: With Elastic's Agent Builder, you can define custom tools specific to your indices and datasets and expose them to Foundry agents via MCP.
- Enterprise Grounding: Ensure agent outputs are always based on proprietary, up-to-date data, reducing hallucinations and improving trust.

Deployment: Getting Started

Follow these steps to integrate Elastic MCP Server with your Foundry agents. Within your Foundry project, you can either:

1. Go to Build in the top menu, then select Tools.
2. Click Connect a Tool.
3. Select the Catalog tab, search for Elasticsearch, and click Create.
4. When prompted, configure the Elasticsearch details by providing a name, your Kibana endpoint, and your Elasticsearch API key.
5. Click Use in an agent and select an existing agent to integrate Elastic MCP Server.

Alternatively, within your agent:

1. Click Tools.
2. Click Add, then select Custom.
3. Search for Elasticsearch, add it, and configure the tool as described above.

The tool will now appear in your agent's configuration. You are all set to interact with your Elasticsearch projects and deployments!

Conclusion & Next Steps

The addition of Elastic MCP Server to the Foundry Tool Catalog empowers developers to build the next generation of intelligent, grounded AI agents, combining Microsoft's agentic platform with Elastic's cutting-edge vector search. Whether you're building RAG-powered copilots, automating workflows, or orchestrating multi-agent systems, this integration accelerates your journey from prototype to production.

Ready to get started?

- Get started with Elastic via the Azure Marketplace or Azure portal. New users get a 7-day free trial!
- Explore agent creation in the Microsoft Foundry portal and try the Foundry Tool Catalog.
- Deep dive into Elastic MCP and Agent Builder.
- Join us at Microsoft Ignite 2025 for live demos, deep dives, and more on building agentic AI with Elastic and Microsoft Foundry!
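As a closing technical note, the hybrid retrieval described under "Unlocking Vector Search for RAG" corresponds to an Elasticsearch `_search` request that combines a lexical clause with a kNN vector clause. A sketch of such a request body, where the index name, field names, and tiny embedding vector are hypothetical placeholders (a real vector would come from your embedding model):

```python
import json

# Hybrid search body: lexical "match" plus kNN vector retrieval.
# "content"/"content_vector" and the 3-dim vector are placeholders
# for illustration only.
search_body = {
    "query": {"match": {"content": "quarterly revenue report"}},
    "knn": {
        "field": "content_vector",
        "query_vector": [0.12, -0.53, 0.07],  # from your embedding model
        "k": 5,
        "num_candidates": 50,
    },
    "size": 5,
}

# This body would be POSTed to https://<cluster>/<index>/_search;
# here we just render it.
print(json.dumps(search_body, indent=2))
```

An Elastic MCP tool built with Agent Builder would issue a request of roughly this shape on the agent's behalf and hand the top hits back as grounding context.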