Azure AI Foundry

Foundry Agent Service at Ignite 2025: Simple to Build. Powerful to Deploy. Trusted to Operate.
The upgraded Foundry Agent Service delivers a unified, simplified platform with managed hosting, built-in memory, tool catalogs, and seamless integration with the Microsoft Agent Framework. Developers can now deploy agents faster and more securely, leveraging one-click publishing to Microsoft 365 and advanced governance features for streamlined enterprise AI operations.
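
To ground the announcement in code, here is a minimal sketch of the basic hosted-agent flow with the azure-ai-agents Python SDK: create an agent, start a thread, send a message, and process a run. The model name, environment variable, and exact preview method names are assumptions, so consult the Foundry Agent Service documentation for the current SDK surface.

```python
# A minimal sketch of the hosted-agent flow using the azure-ai-agents Python SDK.
# Assumptions: PROJECT_ENDPOINT points at your Foundry project and a chat model
# (here "gpt-4o") is deployed; method names reflect the preview SDK and may shift.
import os
from azure.identity import DefaultAzureCredential
from azure.ai.agents import AgentsClient

client = AgentsClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Create (deploy) an agent with instructions.
agent = client.create_agent(
    model="gpt-4o",
    name="support-agent",
    instructions="You answer questions about internal IT policies, briefly and accurately.",
)

# Start a conversation thread, add a user message, and run the agent on it.
thread = client.threads.create()
client.messages.create(thread_id=thread.id, role="user", content="How do I reset my VPN token?")
run = client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)

print(f"Run finished with status: {run.status}")
for message in client.messages.list(thread_id=thread.id):
    # message.content is a list of content parts; printing the raw object is enough for a sketch.
    print(message.role, message.content)
```

The same agent can then be published to Microsoft 365 or governed through the portal features described above; this snippet only covers the create-and-run loop.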

Introducing Microsoft Agent Factory

Microsoft Agent Factory is a new program designed for organizations that want to move from experimentation to execution faster. With a single plan, organizations can build agents with Work IQ, Fabric IQ, and Foundry IQ using Microsoft Foundry and Copilot Studio. They can also deploy their agents anywhere, including Microsoft 365 Copilot, with no upfront licensing or provisioning required. Eligible organizations can also tap into hands-on engagement from top AI Forward Deployed Engineers (FDEs) and access tailored role-based training to boost AI fluency across teams.

Foundry IQ: Unlocking ubiquitous knowledge for agents

Introducing Foundry IQ by Azure AI Search in Microsoft Foundry. Foundry IQ is a centralized knowledge layer that connects agents to data with the next generation of retrieval-augmented generation (RAG). Foundry IQ includes the following features:

- Knowledge bases – Available directly in the new Foundry portal, knowledge bases are reusable, topic-centric collections that ground multiple agents and applications through a single API.
- Automatically indexed and federated knowledge sources – Expand what data an agent can reach by connecting to both indexed and remote knowledge sources. For indexed sources, Foundry IQ delivers automatic indexing, vectorization, and enrichment for text, images, and complex documents.
- Agentic retrieval engine in knowledge bases – A self-reflective query engine that uses AI to plan, select sources, search, rank, and synthesize answers across sources, with configurable "retrieval reasoning effort."
- Enterprise-grade security and governance – Support for document-level access control, alignment with existing permission models, and options for both indexed and remote data.

Foundry IQ is available in public preview through the new Foundry portal and the Azure portal with Azure AI Search. Foundry IQ is part of Microsoft's intelligence layer with Fabric IQ and Work IQ.
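
To make the "single API" idea concrete, here is a minimal sketch of what querying a knowledge base might look like from Python. This is illustrative only: the route, API version, and field names (such as `retrievalReasoningEffort`) are assumptions standing in for the preview contract, so check the Foundry IQ and Azure AI Search preview documentation for the actual request shape.

```python
# Illustrative sketch only: the endpoint route, API version, and payload fields
# below are assumptions, not the documented Foundry IQ contract.
import os
import requests

search_endpoint = os.environ["AZURE_SEARCH_ENDPOINT"]  # e.g. https://<service>.search.windows.net
api_key = os.environ["AZURE_SEARCH_API_KEY"]

# Hypothetical request: ask a knowledge base to plan, retrieve, and synthesize
# an answer with a configurable retrieval reasoning effort.
payload = {
    "query": "What is our travel reimbursement policy for contractors?",
    "retrievalReasoningEffort": "medium",  # assumed values: low | medium | high
}

response = requests.post(
    f"{search_endpoint}/knowledgeBases/hr-policies/retrieve?api-version=2025-11-01-preview",  # assumed route
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

The point of the sketch is the shape of the workflow: one call against a named knowledge base, with retrieval planning and synthesis handled by the agentic retrieval engine rather than by the calling application.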

AI Toolkit Extension Pack for Visual Studio Code: Ignite 2025 Update

Unlock the Latest Agentic App Capabilities

The Ignite 2025 update delivers a major leap forward for the AI Toolkit extension pack in VS Code, introducing a unified, end-to-end environment for building, visualizing, and deploying agentic applications to Microsoft Foundry, plus the addition of Anthropic's frontier Claude models in the Model Catalog. This release enables developers to build and debug locally in VS Code, then deploy to the cloud with a single click. Seamlessly switch between VS Code and the Foundry portal for visualization, orchestration, and evaluation, creating a smooth roundtrip workflow that accelerates innovation and delivers a truly unified AI development experience.

Download the AI Toolkit (http://aka.ms/aitoolkit) today and start building next-generation agentic apps in VS Code!

What Can You Do with the AI Toolkit Extension Pack?

Access Anthropic models in the Model Catalog
Following the Microsoft, NVIDIA, and Anthropic strategic partnership announcement today, we are excited to share that Anthropic's frontier Claude models, including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5, are now integrated into the AI Toolkit, providing even more choice and flexibility when building intelligent applications and AI agents.

Build AI Agents Using GitHub Copilot
- Scaffold agent applications using best-practice patterns, tool-calling examples, tracing hooks, and test scaffolds, all powered by Copilot and aligned with the Microsoft Agent Framework.
- Generate agent code in Python or .NET, giving you flexibility to target your preferred runtime.

Build and Customize YAML Workflows
- Design YAML-based workflows in the Foundry portal, then continue editing and testing directly in VS Code.
- To customize your YAML-based workflows, instantly convert them to Agent Framework code using GitHub Copilot.
- Upgrade from declarative design to code-first customization without starting from scratch.

Visualize Multi-Agent Workflows
- Envision your code-based agent workflows with an interactive graph visualizer that reveals each component and how they connect, and watch in real time as each node lights up while your agent runs.
- Use the visualizer to understand and debug complex agent graphs, making iteration fast and intuitive.

Experiment, Debug, and Evaluate Locally
- Use the Hosted Agents Playground to quickly interact with your agents on your development machine.
- Leverage local tracing support to debug reasoning steps, tool calls, and latency hotspots, so you can quickly diagnose and fix issues.
- Define metrics, tasks, and datasets for agent evaluation, then implement the metrics using the Foundry Evaluation SDK and orchestrate evaluation runs with the help of Copilot.

Seamless Integration Across Environments
- Jump from the Foundry portal to VS Code Web for a development environment in your preferred code editor setting.
- Open YAML workflows, playgrounds, and agent templates directly in VS Code for editing and deployment.

How to Get Started
1. Install the AI Toolkit extension pack from the VS Code marketplace.
2. Check out the documentation.
3. Get started with building workflows with Microsoft Foundry in VS Code:
   - Work with Hosted (Pro-code) Agent workflows in VS Code
   - Work with Declarative (Low-code) Agent workflows in VS Code

Feedback & Support

Try out the extensions and let us know what you think! File issues or feedback on our GitHub repo for the Foundry extension and the AI Toolkit extension. Your input helps us make continuous improvements.
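
As a rough idea of the kind of Python scaffold the toolkit generates with Copilot, here is a minimal agent sketch in the spirit of the Microsoft Agent Framework. The import paths, class names, and helper methods below are assumptions based on the preview SDK and may not match the generated code exactly; treat it as a sketch rather than toolkit output.

```python
# Sketch only: module paths and method names are assumptions about the preview
# Microsoft Agent Framework SDK, not verified toolkit output. Azure OpenAI
# endpoint and deployment are assumed to come from environment variables.
import asyncio
from agent_framework.azure import AzureOpenAIChatClient  # assumed import path
from azure.identity import AzureCliCredential


def get_build_status(pipeline_name: str) -> str:
    """Toy tool the agent can call; replace with a real data source."""
    return f"Pipeline '{pipeline_name}' completed successfully."


async def main() -> None:
    # Assumed factory method that wraps an Azure OpenAI deployment as a chat agent
    # and registers plain Python functions as callable tools.
    agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
        name="pipeline-helper",
        instructions="Answer questions about build pipelines. Use tools when needed.",
        tools=[get_build_status],
    )
    result = await agent.run("What is the status of the nightly pipeline?")
    print(result.text)


asyncio.run(main())
```

The same structure (instructions, tools, a run loop) is what the graph visualizer and local tracing features above operate on, whether the agent was scaffolded by Copilot or written by hand.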

Announcing Elastic MCP Server in Microsoft Foundry Tool Catalog

Introduction

The future of enterprise AI is agentic - driven by intelligent, context-aware agents that deliver real business value. Microsoft Foundry is committed to enabling developers with the tools and integrations they need to build, deploy, and govern these advanced AI solutions. Today, we are excited to announce that Elastic MCP Server is now discoverable in the Microsoft Foundry Tool Catalog, unlocking seamless access to Elastic's industry-leading vector search capabilities for Retrieval-Augmented Generation (RAG) scenarios.

Seamless Integration: Elastic Meets Microsoft Foundry

This integration is a major milestone in our ongoing effort to foster an open, extensible AI ecosystem. With Elastic MCP Server now available in the Azure MCP Registry, developers can easily connect their agents to Elastic's powerful search and analytics engine using the Model Context Protocol (MCP). This ensures that agents built on Microsoft Foundry are grounded in trusted, enterprise-grade data - delivering accurate, relevant, and verifiable responses.

- Create Elastic Cloud hosted deployments or Serverless Search projects through the Microsoft Marketplace or the Azure portal.
- Discoverability: Elastic MCP Server is listed as a remote MCP server in the Azure MCP Registry and the Foundry Tool Catalog.
- Multi-Agent Workflows: Enable collaborative agent scenarios via the A2A protocol.

Unlocking Vector Search for RAG

Elastic's advanced vector search capabilities are now natively accessible to Foundry agents, enabling powerful Retrieval-Augmented Generation (RAG) workflows:

- Semantic Search: Agents can perform hybrid and vector-based searches over enterprise data, retrieving the most relevant context for grounding LLM responses.
- Customizable Retrieval: With Elastic's Agent Builder, you can define custom tools specific to your indices and datasets and expose them to Foundry agents via MCP.
- Enterprise Grounding: Ensure agent outputs are always based on proprietary, up-to-date data, reducing hallucinations and improving trust.

Deployment: Getting Started

Follow these steps to integrate Elastic MCP Server with your Foundry agents.

Within your Foundry project:
1. Go to Build in the top menu, then select Tools.
2. Click Connect a Tool.
3. Select the Catalog tab, search for Elasticsearch, and click Create.
4. When prompted, configure the Elasticsearch details by providing a name, your Kibana endpoint, and your Elasticsearch API key.
5. Click Use in an agent and select an existing agent to integrate Elastic MCP Server.

Alternatively, within your agent:
1. Click Tools.
2. Click Add, then select Custom.
3. Search for Elasticsearch, add it, and configure the tool as described above.

The tool will now appear in your agent's configuration. You are all set to interact with your Elasticsearch projects and deployments!

Conclusion & Next Steps

The addition of Elastic MCP Server to the Foundry Tool Catalog empowers developers to build the next generation of intelligent, grounded AI agents - combining Microsoft's agentic platform with Elastic's cutting-edge vector search. Whether you're building RAG-powered copilots, automating workflows, or orchestrating multi-agent systems, this integration accelerates your journey from prototype to production.

Ready to get started?
- Get started with Elastic via the Azure Marketplace or Azure portal. New users get a 7-day free trial!
- Explore agent creation in the Microsoft Foundry portal and try the Foundry Tool Catalog.
- Deep dive into Elastic MCP and Agent Builder.

Join us at Microsoft Ignite 2025 for live demos, deep dives, and more on building agentic AI with Elastic and Microsoft Foundry!
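
For teams who prefer code over the portal steps above, the following is a rough sketch of how a remote MCP server such as Elastic's might be attached to a Foundry agent with the Python azure-ai-agents preview SDK. The `McpTool` usage, the model name, and the omitted authentication header wiring are assumptions; the portal flow described earlier remains the documented path.

```python
# Sketch only: assumes the azure-ai-agents preview SDK exposes an MCP tool type
# roughly as shown; authentication headers for the Elastic endpoint are omitted
# and would need to be configured per the Elastic MCP Server documentation.
import os
from azure.identity import DefaultAzureCredential
from azure.ai.agents import AgentsClient
from azure.ai.agents.models import McpTool  # assumed preview import

client = AgentsClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Point the tool at the remote Elastic MCP Server exposed by your deployment.
elastic_mcp = McpTool(
    server_label="elasticsearch",
    server_url=os.environ["ELASTIC_MCP_SERVER_URL"],
)

agent = client.create_agent(
    model="gpt-4o",  # any chat model deployed in your Foundry project
    name="elastic-grounded-agent",
    instructions="Use the Elasticsearch tools whenever a question needs enterprise data.",
    tools=elastic_mcp.definitions,
)
print(f"Created agent {agent.id} with Elastic MCP tools attached.")
```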

Foundry IQ: boost response relevance by 36% with agentic retrieval

The latest RAG performance evaluations and results for knowledge bases and the built-in agentic retrieval engine. Foundry IQ by Azure AI Search is a unified knowledge layer for agents, designed to improve response performance, automate RAG workflows, and enable enterprise-ready grounding. These evaluations tested RAG performance for knowledge bases and new features, including retrieval reasoning effort and federated sources such as web and SharePoint for Microsoft 365. Foundry IQ and Azure AI Search are part of Microsoft Foundry.

RosettaFold3 Model at Ignite 2025: Extending Frontier of Biomolecular Modeling in Microsoft Foundry

Today at Microsoft Ignite 2025, we are excited to launch RosettaFold3 (RF3) on Microsoft Foundry, making a new generation of multi-molecular structure prediction models available to researchers, biotech innovators, and scientific teams worldwide. RF3 was developed by the Baker lab and DiMaio lab at the Institute for Protein Design (IPD) at the University of Washington, in collaboration with Microsoft's AI for Good Lab and other research partners. RF3 is now available in Foundry Models, offering scalable access to a new generation of biomolecular modeling capabilities.

Try RF3 now in Foundry Models.

A new multi-molecular modeling system, now accessible in Foundry Models

RF3 represents a leap forward in biomolecular structure prediction. Unlike previous-generation models focused narrowly on proteins, RF3 can jointly model:
- Proteins (enzymes, antibodies, peptides)
- Nucleic acids (DNA, RNA)
- Small molecules/ligands
- Multi-chain complexes

This unified modeling approach allows researchers to explore entire interaction systems (protein-ligand docking, protein-RNA assembly, protein-DNA binding, and more) in a single end-to-end workflow.

Key advances in RF3

RF3 incorporates several advancements in protein and complex prediction, making it the state-of-the-art open-source model.

Joint atom-level modeling across molecular types
RF3 can simultaneously model all atom types across proteins, nucleic acids, and ligands, enabled by innovations in multimodal transformers and generative diffusion models.

Unprecedented control: atom-level conditioning
Users can provide the 3D structure of a ligand or compound, and RF3 will fold a protein around it. This atom-level conditioning unlocks:
- Targeted drug-design workflows
- Protein pocket and surface engineering
- Complex interaction modeling

[Figure: Example showing how RF3 allows conditioning on user inputs, offering greater control of the model's predictions.]

Broad templating support for structure-guided design
RF3 allows users to guide structure prediction using:
- Distance constraints
- Geometric templates
- Experimental data (e.g., cryo-EM)

This flexibility is limited in other models and makes RF3 ideal for hybrid computation and wet-lab workflows.

Extensible foundation for scientific and industrial research
RF3 can be adapted to diverse application areas, including enzyme engineering, materials science, agriculture, sustainability, and synthetic biology.

Use cases

RF3's multimolecular modeling capabilities have broad applicability beyond fundamental biology. The model enables breakthroughs across medicine, materials science, sustainability, and defense, where structure-guided design directly translates into measurable innovation. Sectors and illustrative use cases:

- Medicine: Gene therapy research. RF3 enables the design of custom proteins that bind specific DNA sequences for targeted genome repair.
- Materials Science: Inspired by natural protein fibers such as wool and silk, IPD researchers are designing synthetic fibers with tunable mechanical properties and texture, enabling sustainable textiles and advanced materials.
- Sustainability: RF3 supports enzyme design for plastic degradation and waste recycling, contributing to circular bioeconomy initiatives.
- Disease & Vaccine Development: RF3-powered workflows will contribute to structure-guided vaccine design, building on IPD's prior success with the SKYCovione COVID-19 nanoparticle vaccine developed with SK Bioscience and GSK.
- Crop Science and Food Security: Support for gene-editing technology (due to protein-DNA binding prediction capabilities) for agricultural research, and design of small proteins, such as antimicrobial or antifungal peptides, to fight crop and tree diseases such as citrus greening.
- Defense & Biosecurity: Enables detection and rapid countermeasure design against toxins or novel pathogens; models of this class are being studied for biosafety applications (Horvitz et al., Science, 2025).
- Aerospace & Extreme Environments: Supports design of lightweight, self-healing, and radiation-resistant biomaterials capable of functioning under non-terrestrial conditions (e.g., high temperature, pressure, or radiation exposure).

RF3 has the potential to lower the cost of exploratory modeling, raise success rates in structure-guided discovery, and expand biomolecular AI into domains that were previously limited by sparse experimental structures or difficult multimolecular interactions. Because the model and training framework are open and extensible, partners can also adapt RF3 for their own research, making it a foundation for the next generation of biomolecular AI on Microsoft Foundry.

Get started today

RosettaFold3 (RF3) brings advanced multimolecular modeling capabilities into Foundry Models, enabling researchers and biotech teams to run structure-guided workflows with greater flexibility and speed. Within Microsoft Foundry, you can integrate RF3 into your existing scientific processes, combining your data, templates, and downstream analysis tools in one connected environment.

Start exploring the next frontier of biomolecular modeling with RosettaFold3 in Foundry Models. You can also discover other early-stage AI innovations in Foundry Labs.

If you're attending Microsoft Ignite 2025, or watching on demand, be sure to check out our session.

Session: AI Frontier in Foundry Labs: Experiment Today, Lead Tomorrow

About the session: "Curious about the next wave of AI breakthroughs? Get a sneak peek into the future of AI with Azure AI Foundry Labs—your front door to experimental models, multi-agent orchestration prototypes, Agent Factory blueprints, and edge innovations. If you're a researcher eager to test, validate, and influence what's next in enterprise AI, this session is your launchpad. See how Labs lets you experiment fast, collaborate with innovators, and turn new ideas into real impact."
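
For readers who want a feel for what calling RF3 from code could look like once a deployment exists in Foundry Models, here is a deliberately generic sketch. The endpoint route, request schema (a sequence plus an optional ligand SMILES string for atom-level conditioning), and response handling are all assumptions for illustration; the actual inference contract is defined by the RF3 model card in Foundry Models.

```python
# Illustrative sketch only: the endpoint route and payload schema below are
# assumptions, not the published RF3 inference contract. Consult the model
# card in Foundry Models for the real request/response format.
import json
import os
import requests

endpoint = os.environ["RF3_ENDPOINT"]  # scoring URL of your RF3 deployment
api_key = os.environ["RF3_API_KEY"]

payload = {
    # A short protein sequence to fold (hypothetical example input).
    "protein_sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    # Optional ligand for atom-level conditioning, expressed as SMILES.
    "ligand_smiles": "CC(=O)Oc1ccccc1C(=O)O",
}

response = requests.post(
    f"{endpoint}/score",  # assumed route
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=300,
)
response.raise_for_status()

# A real response would likely include predicted coordinates and confidence
# scores; here we simply persist whatever the service returns.
with open("rf3_prediction.json", "w") as f:
    json.dump(response.json(), f, indent=2)
print("Prediction saved to rf3_prediction.json")
```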

Issue with: Connected Agent Tool Forcing from an Orchestrator Agent

Hi Team,

I am trying to force tool selection for my Connected Agents from an Orchestrator Agent in my multi-agent model. I'm not sure if that is possible. Apologies in advance for too much detail, as I really need this to work! Please let me know if there is a flaw in my approach.

The main intention behind moving towards tool forcing is that, with the current set of instructions provided to my Orchestrator Agent, it was producing hallucinated responses from my Child Agents for each query.

I have an Orchestrator Agent which is connected to the following Child Agents (each with detailed instructions):
- Child Agent 1 - Connects to a SQL DB in Fabric to fetch information from log tables.
- Child Agent 2 - Invokes an OpenAPI Action tool for Azure Functions to run pipelines in Fabric.

I have provided details on 3 approaches below.

Approach 1:

I have checked the MS docs; "CONNECTED_AGENT" is a valid property for ToolChoiceType: https://learn.microsoft.com/en-us/python/api/azure-ai-agents/azure.ai.agents.models.agentsnamedtoolchoicetype?view=azure-python

I installed the latest Python AI Agents SDK beta version, as it also supports Connected Agents: https://pypi.org/project/azure-ai-agents/1.2.0b6/#create-an-agent-using-another-agents

The following code is integrated into a Streamlit UI.

Python code:

```python
# Imports added for context; PROJECT_ENDPOINT, agent IDs, and user_query are
# defined elsewhere in the Streamlit app.
import streamlit as st
from azure.identity import DefaultAzureCredential
from azure.ai.agents import AgentsClient
from azure.ai.agents.models import ConnectedAgentTool

agents_client = AgentsClient(
    endpoint=PROJECT_ENDPOINT,
    credential=DefaultAzureCredential(
        exclude_environment_credential=True,
        exclude_managed_identity_credential=True
    )
)

# -------------------------------------------------------------------
# UPDATE ORCHESTRATOR TOOLS (executed once)
# -------------------------------------------------------------------
fabric_tool = ConnectedAgentTool(
    id=FABRIC_AGENT_ID,
    name="Fabric_Agent",
    description="Handles Fabric pipeline questions"
)

openapi_tool = ConnectedAgentTool(
    id=OPENAPI_AGENT_ID,
    name="Fabric_Pipeline_Trigger",
    description="Handles OpenAPI pipeline triggers"
)

# Update orchestrator agent to include child agent tools
agents_client.update_agent(
    agent_id=ORCH_AGENT_ID,
    tools=[
        fabric_tool.definitions[0],
        openapi_tool.definitions[0]
    ],
    instructions="""
    You are the Master Orchestrator Agent.

    Use:
    - "Fabric_Agent" when the user's question includes: "Ingestion", "Trigger", "source", "Connection"
    - "Fabric_Pipeline_Trigger" when the question mentions: "OpenAPI", "Trigger", "API call", "Pipeline start"

    Only call tools when needed. Respond clearly and concisely.
    """
)

# ------------------------- TOOL ROUTING LOGIC -------------------------
def choose_tool(user_input: str):
    text = user_input.lower()
    if any(k in text for k in ["log", "trigger", "pipeline", "connection"]):
        return fabric_tool
    if any(k in text for k in ["openapi", "api call", "pipeline start"]):
        return openapi_tool
    # No forced routing → let orchestrator decide
    return None

forced_tool = choose_tool(user_query)

run = agents_client.runs.create_and_process(
    thread_id=st.session_state.thread.id,
    agent_id=ORCH_AGENT_ID,
    tool_choice={
        "type": "connected_agent",
        "function": forced_tool.definitions[0]
    }
)
```

Error:

azure.core.exceptions.HttpResponseError: (invalid_value) Invalid value: 'connected_agent'. Supported values are: 'code_interpreter', 'function', 'file_search', 'openapi', 'azure_function', 'azure_ai_search', 'bing_grounding', 'bing_custom_search', 'deep_research', 'sharepoint_grounding', 'fabric_dataagent', 'computer_use_preview', and 'image_generation'.
Code: invalid_value
Message: Invalid value: 'connected_agent'.
Supported values are: 'code_interpreter', 'function', 'file_search', 'openapi', 'azure_function', 'azure_ai_search', 'bing_grounding', 'bing_custom_search', 'deep_research', 'sharepoint_grounding', 'fabric_dataagent', 'computer_use_preview', and 'image_generation'."

Approach 2:

- Create ConnectedAgentTool as you do, and pass its definitions to update_agent(...).
- Force a tool by name using tool_choice={"type": "function", "function": {"name": "<tool-name>"}}.
- Do not set type: "connected_agent" anywhere; there is no such tool_choice.type.

Code:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.agents import AgentsClient
# Adjust imports to your SDK layout:
# e.g., from azure.ai.agents.tool import ConnectedAgentTool

agents_client = AgentsClient(
    endpoint=PROJECT_ENDPOINT,
    credential=DefaultAzureCredential(
        exclude_environment_credential=True,
        exclude_managed_identity_credential=True  # keep your current credential choices
    )
)

# -------------------------------------------------------------------
# CREATE CONNECTED AGENT TOOLS (child agents exposed as function tools)
# -------------------------------------------------------------------
fabric_tool = ConnectedAgentTool(
    id=FABRIC_AGENT_ID,              # the **child agent ID** you created elsewhere
    name="Fabric_Agent",             # **tool name** visible to the orchestrator
    description="Handles Fabric pipeline questions"
)

openapi_tool = ConnectedAgentTool(
    id=OPENAPI_AGENT_ID,             # another child agent ID
    name="Fabric_Pipeline_Trigger",  # tool name visible to the orchestrator
    description="Handles OpenAPI pipeline triggers"
)

# -------------------------------------------------------------------
# UPDATE ORCHESTRATOR: attach child tools
# -------------------------------------------------------------------
# NOTE: definitions is usually a list of ToolDefinition objects produced by the helper
agents_client.update_agent(
    agent_id=ORCH_AGENT_ID,
    tools=[
        fabric_tool.definitions[0],
        openapi_tool.definitions[0]
    ],
    instructions="""
    You are the Master Orchestrator Agent.

    Use:
    - "Fabric_Agent" when the user's question includes: "Ingestion", "Trigger", "source", "Connection"
    - "Fabric_Pipeline_Trigger" when the question mentions: "OpenAPI", "Trigger", "API call", "Pipeline start"

    Only call tools when needed. Respond clearly and concisely.
    """
)

# ------------------------- TOOL ROUTING LOGIC -------------------------
def choose_tool(user_input: str):
    text = user_input.lower()
    if any(k in text for k in ["log", "trigger", "pipeline", "connection"]):
        return "Fabric_Agent"              # return the **tool name**
    if any(k in text for k in ["openapi", "api call", "pipeline start"]):
        return "Fabric_Pipeline_Trigger"   # return the **tool name**
    return None

forced_tool_name = choose_tool(user_query)

# ------------------------- RUN INVOCATION -------------------------
if forced_tool_name:
    # FORCE a specific connected agent by **function name**
    run = agents_client.runs.create_and_process(
        thread_id=st.session_state.thread.id,
        agent_id=ORCH_AGENT_ID,
        tool_choice={
            "type": "function",            # <-- REQUIRED
            "function": {
                "name": forced_tool_name   # <-- must match the tool's name as registered
            }
        }
    )
else:
    # Let the orchestrator auto-select (no tool_choice → "auto")
    run = agents_client.runs.create_and_process(
        thread_id=st.session_state.thread.id,
        agent_id=ORCH_AGENT_ID
    )
```

Error:

azure.core.exceptions.HttpResponseError: (None) Invalid tool_choice: Fabric_Agent. You must also pass this tool in the 'tools' list on the Run.
Code: None
Message: Invalid tool_choice: Fabric_Agent.
You must also pass this tool in the 'tools' list on the Run.

Approach 3:

Modified version of the 2nd approach with a tool definitions call:

```python
# ------------------------- TOOL ROUTING LOGIC -------------------------
def choose_tool(user_input: str):
    text = user_input.lower()
    if any(k in text for k in ["log", "trigger", "pipeline", "connection"]):
        # return "Fabric_Agent"
        return ("Fabric_Agent", fabric_tool.definitions[0])
    if any(k in text for k in ["openapi", "api call", "pipeline start"]):
        # return "Fabric_Pipeline_Trigger"
        return ("Fabric_Pipeline_Trigger", openapi_tool.definitions[0])
    # No forced routing → let orchestrator decide
    # return None
    return (None, None)

# forced_tool = choose_tool(user_query)
forced_tool_name, forced_tool_def = choose_tool(user_query)

# ------------------------- ORCHESTRATOR CALL -------------------------
if forced_tool_name:
    tool_choice = {
        "type": "function",
        "function": {
            "name": forced_tool_name
        }
    }

    run = agents_client.runs.create_and_process(
        thread_id=st.session_state.thread.id,
        agent_id=ORCH_AGENT_ID,
        tool_choice=tool_choice,
        tools=[forced_tool_def]   # << only the specific tool
    )
else:
    # No forced tool, orchestrator decides
    run = agents_client.runs.create_and_process(
        thread_id=st.session_state.thread.id,
        agent_id=ORCH_AGENT_ID
    )
```

Error:

TypeError: azure.ai.agents.operations._patch.RunsOperations.create() got multiple values for keyword argument 'tools'