Latest Discussions
An agent that converses with Fabric Warehouse data using natural language.
I'm trying to create an agent that uses Microsoft Foundry to converse with data in a Microsoft Fabric warehouse using natural language. Initially, I tried using Azure AI Search as a tool, but it didn't work due to the 1,000-item index limit. I suspect there might be a way to access the warehouse directly without going through Azure AI Search, but I don't know how. Could you please tell me how to implement this? Thank you in advance.
Posted by yy on Mar 18, 2026

Typo in Azure Foundry Learn
Hi Microsoft Foundry, I am not sure if this is the right place to post this, but I just wanted to report that there is a typo on this specific page: https://learn.microsoft.com/en-us/azure/foundry/openai/supported-languages?tabs=dotnet-secure%2Csecure%2Cpython-entra&pivots=programming-language-python Have a nice day.
Posted by BenjaminChou1120 on Mar 18, 2026

o3-deep-research fails with status "incomplete" and reason "content_filter"
I'm working on doing deep research over internal data. I'm currently using the Azure OpenAI Responses API with an MCP tool. The underlying MCP server is deployed to Azure Container Apps, with search and fetch tools whose signatures comply with the specification (https://developers.openai.com/apps-sdk/build/mcp-server#company-knowledge-compatibility). The OpenAI client is created with the o3-deep-research model and the MCP tool, and the response status is checked in a loop (https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/deep-research#remote-mcp-server-with-deep-research). The deep research runs for some time; I can see in the logs that the handshake is made, ListTools is invoked, the search tool is called, and then fetch is called for the queries framed by the model. But intermittently the response status becomes "incomplete" with the incomplete reason "content_filter". Otherwise the deep research works fine. I am not able to identify the root cause, as there seems to be no way to tell whether the prompt or the completion triggered the content filter. How can I debug this, find the root cause, and fix it? Or is there a known issue with the o3-deep-research model's intermediate reasoning completions, or could the search and fetch tool results be causing this? I uploaded a file and made it available to the MCP server; the search tool uses an Azure OpenAI agent to search the data using File Search, and the fetch tool gets the content of the file based on the id passed. For the same file and the same research topic, the issue does not always occur, only intermittently.
Posted by Murugates on Mar 17, 2026

Code Interpreter Container Failing (Timeout) on Create in EastUS2
Hello, we've been using Code Interpreter reliably for over a year, but starting on 2/21/2026, containers created in our EastUS2 Foundry instance intermittently fail. It is easy to reproduce by running the cURL below. Run it a few times: it will succeed the first 1-4 times, and then you will hit the timeout. Sometimes the timeout occurs on the second create, sometimes on the 3rd, 4th, or 5th. While one request is timing out, any other requests to create a container also hang. This impacts all users if Code Interpreter is set to automatic container creation with the tool enabled during normal chat. For now we've redeployed resources in West US and do not get the error there, but we do not have quota for more advanced models in that region, so we need this resolved ASAP.

```shell
curl -X POST "https://[redacted].cognitiveservices.azure.com/openai/v1/containers" \
  -H "Content-Type: application/json" \
  -H "api-key: [redacted]" \
  -d '{
    "name": "test-container-eastus2-repro",
    "expires_after": {
      "anchor": "last_active_at",
      "minutes": 20
    }
  }'
```

Posted by a-developer on Feb 26, 2026

Error when creating an Assistant in Microsoft Foundry using a Fabric Data Agent
I am facing an issue when using a Microsoft Fabric Data Agent integrated with the new Microsoft Foundry, and I would like your assistance in investigating it.

Scenario:
1. I created a Data Agent in Microsoft Fabric.
2. I connected this Data Agent as a tool within a project in the new Microsoft Foundry.
3. I published the agent to Microsoft Teams and Copilot for Microsoft 365.
4. I configured the required Azure permissions, assigning the appropriate roles to the Foundry project's managed identity (as shown in the attached evidence: the Azure AI Developer and Azure AI User roles).

Issue: when trying to use the published agent, I receive the following error:

Response failed with code tool_user_error: Create assistant failed. If issue persists, please use following identifiers in any support request: ConversationId = PQbM0hGUvMF0X5EDA62v3-br activityId = PQbM0hGUvMF0X5EDA62v3-br|0000000

Additional notes:
- Permissions appear to be correctly configured in Azure.
- The error occurs during the assistant creation/execution phase via Foundry, after publishing.
- The same behavior occurs both in Teams and in Copilot for Microsoft 365.

Could you please verify:
- whether there are any additional permissions required when using Fabric Data Agents as tools in Foundry;
- whether there are any known limitations or specific requirements for publishing to Teams/Copilot for Microsoft 365;
- and analyze the error identifiers provided above.

I appreciate your support and look forward to your guidance on how to resolve this issue.
(Solved)

Synthetic Dataset Format from AI Foundry Not Compatible with Evaluation Schema
Current situation: the synthetic dataset created from AI Foundry's synthetic data feature is generated in the following messages format:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "What is the primary purpose?" },
    { "role": "assistant", "content": "The primary purpose is..." }
  ]
}
```

Challenge: when attempting evaluation, especially RAG evaluation, the documentation indicates that the dataset must contain structured fields such as:

- question: the query being asked
- ground_truth: the expected answer

Recommended additional fields:

- reference_context
- metadata

Example of the required format:

```json
{
  "question": "",
  "ground_truth": "",
  "reference_context": "",
  "metadata": { "document": "" }
}
```

Because the synthetic dataset is in messages format, I am unable to map it directly to the required evaluation schema.

Questions: Is there a recommended or supported way to convert a synthetic dataset generated in AI Foundry's messages format into the structured format required for evaluation? Can the user role be mapped to question, and the assistant role to ground_truth? Is there any built-in transformation option within AI Foundry?
Posted by parulpaul01 on Feb 13, 2026

Foundry Agent Deployed to Copilot/Teams Can't Display Images Generated via Code Interpreter
Hello everyone, I've been developing an agent in the new Microsoft Foundry and enabled the Code Interpreter tool for it. In the Agent Playground, I can successfully start a new chat and have the agent generate a chart/image using Code Interpreter. This works as expected in both the old and new Foundry experiences. However, after publishing the agent to Copilot/Teams for my organization, the same prompt that works in the Agent Playground does not function properly. The agent appears to execute the code, but the image is not accessible in Teams. When reviewing the agent traces (via the Traces tab in Foundry), I can see that the agent generates a link to the image in the Code Interpreter sandbox environment, for example: `[Download the bar chart](sandbox:/mnt/data/bar_chart.png)`. This works correctly within Foundry, but the sandbox path is not accessible from Teams, so the link fails there. Is there an officially supported way to surface Code Interpreter-generated files/images when the agent is deployed to Copilot/Teams? Or is the recommended approach to implement a custom tool that uploads generated files to an external storage location (e.g., SharePoint, Blob Storage, or another file hosting service) and returns a publicly accessible link instead? I've had trouble finding anything about this online. Any guidance would be greatly appreciated. Thank you!

New Foundry Agent Issue
Hi all, I'm creating my first agent via the new Foundry, so my questions are probably basic. As always, everything seemed straightforward... until deployment. I created an agent using gpt-4.1, added a list of instructions, and then used the Tools → Upload files functionality to attach a selection of reference documents. Everything worked perfectly in Preview mode. I then used the default option to create a bot service, and it deployed successfully. To test it, I used the Individual scope option (with the intention to share later with a couple of people; I haven't worked that part out yet). Like magic, it appeared in my Teams and M365 Copilot, which was amazing... and then I ran my first search. It thought for a long time and then returned an error in Copilot, and in Teams nothing happens at all. I've looked around for help but drawn a blank. I'm fairly sure it's some kind of permissioning/access issue somewhere, but I can't find where. Any help would be hugely appreciated.
Posted by NewStarterKickoff on Feb 12, 2026

Is there a way to connect two AI Foundry instances to the same Cosmos DB containers?
I defined an Azure AI Foundry connection for Azure Cosmos DB and BYO thread storage in Azure AI Agent Service using these instructions: Integration with Azure AI Agent Service - Azure Cosmos DB for NoSQL | Microsoft Learn. I see that it created three containers under the Cosmos DB account I provided:

- <guid>-agent-entity-store
- v-system-thread-message-store
- <guid>-thread-message-store

I then created another AI Foundry instance and added a connection to the same Cosmos DB account, and it created three different containers under the same database. Is there a way to make them use the exact same containers? I want to run multiple AI Foundry instances that share the same Cosmos DB containers to manage the data.

Publishing a New Foundry Agent to M365 and Teams (Org Scope)
Hello all, I've been trying to publish a small agent from the new Foundry to M365 and Teams following the official documentation, but I am missing something. Please help! The creation part of the agent is easy, and I get to the point where I want to publish it to users with an Org scope. At this point, I would need to deploy the agent to users in the Microsoft 365 admin center (MAC). However, when I open MAC, there is nothing to validate! My new agent doesn't appear anywhere in M365 Copilot or Teams, for me or for my users. What am I missing? Do I need to do something in Entra as well? Thanks!
Posted by JMarc on Feb 10, 2026
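On the synthetic-dataset question above: assuming the user turn can be treated as the question and the assistant turn as the ground truth (the thread's own suggested mapping, not a documented AI Foundry transformation), a minimal conversion sketch in Python could look like this. The output field names follow the evaluation schema quoted in that thread.

```python
import json

def messages_to_eval_record(record: dict, document: str = "") -> dict:
    """Map a synthetic 'messages' record to the evaluation schema.

    Assumes one user turn (-> question) and one assistant turn
    (-> ground_truth); the system prompt is dropped.
    """
    question = ground_truth = ""
    for msg in record["messages"]:
        if msg["role"] == "user":
            question = msg["content"]
        elif msg["role"] == "assistant":
            ground_truth = msg["content"]
    return {
        "question": question,
        "ground_truth": ground_truth,
        "reference_context": "",  # fill from your source documents if available
        "metadata": {"document": document},
    }

def convert_jsonl(src_path: str, dst_path: str) -> None:
    """Convert a JSONL file of 'messages' records to evaluation records."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            dst.write(json.dumps(messages_to_eval_record(json.loads(line))) + "\n")
```

reference_context cannot be recovered from the messages format alone, so it is left empty here; if the synthetic data was generated from known documents, it would need to be joined back in separately.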
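On the Code Interpreter image question: one workaround along the lines the poster suggests is to post-process the agent's reply before relaying it to Teams, uploading each sandbox file to storage you control and rewriting the markdown links. This is a hypothetical sketch, not an official Foundry feature; `upload_file` is a stand-in for whatever SharePoint/Blob upload function you supply.

```python
import re

# Matches markdown links of the form [label](sandbox:/path/to/file)
SANDBOX_LINK = re.compile(r"\[([^\]]+)\]\(sandbox:(/[^)]+)\)")

def rewrite_sandbox_links(markdown: str, upload_file) -> str:
    """Replace sandbox: links with publicly accessible URLs.

    upload_file(sandbox_path) must fetch the file from the agent's
    container and return a URL reachable from Teams/Copilot.
    """
    def _sub(match: re.Match) -> str:
        label, path = match.group(1), match.group(2)
        return f"[{label}]({upload_file(path)})"
    return SANDBOX_LINK.sub(_sub, markdown)
```

For example, with an uploader that mirrors files into a Blob container, `[Download the bar chart](sandbox:/mnt/data/bar_chart.png)` would be rewritten to point at the blob URL instead of the inaccessible sandbox path.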
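On the deep-research content_filter question: the Responses API does not say which input tripped the filter, but logging the terminal status and incomplete reason on every poll, and correlating those with which tool result was last returned, can at least narrow it down. A minimal triage helper, assuming the response is shaped like the Responses API JSON (a `status` field plus an `incomplete_details` object with a `reason`):

```python
def triage_response(resp: dict) -> str:
    """Summarize a Responses API result for logging/debugging.

    Assumes the documented Responses API shape: 'status' in
    {'completed', 'incomplete', 'failed', ...} and, when incomplete,
    an 'incomplete_details' object carrying a 'reason'.
    """
    status = resp.get("status")
    if status == "incomplete":
        reason = (resp.get("incomplete_details") or {}).get("reason", "unknown")
        return f"incomplete: {reason}"
    return status or "unknown"
```

Logging this string alongside each search/fetch result in the polling loop makes it easier to see whether the filter fires right after a particular tool output, which would point at the file content rather than the prompt.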
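On the EastUS2 container-creation timeouts: until the service-side issue is resolved, a client-side bounded retry with an explicit request timeout at least fails fast instead of hanging. A generic sketch, where the callable you pass in stands for your POST to the containers endpoint (and should set its own HTTP timeout, e.g. the `timeout=` parameter in `requests`):

```python
import time

def call_with_retries(fn, attempts: int = 3, base_delay: float = 2.0):
    """Call fn(), retrying on exception with exponential backoff.

    fn must enforce its own request timeout so a hung container
    create raises instead of blocking forever.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

This does not fix the underlying regional issue, but it keeps a single stuck create from stalling the whole chat flow.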
Tags
- AMA (74 topics)
- AI Platform (56 topics)
- TTS (50 topics)
- azure ai (24 topics)
- azure ai foundry (23 topics)
- azure ai services (18 topics)
- azure machine learning (13 topics)
- azure (12 topics)
- AzureAI (11 topics)
- machine learning (10 topics)