Latest Discussions
Code Interpreter Container Failing (Timeout) on Create EastUS2
Hello, we've been using code interpreter reliably for over a year, but starting on 2/21/2026, containers created in our EastUS2 Foundry instance intermittently fail. This is easy to reproduce by running the cURL command below. Run it a few times: it succeeds the first 1-4 times, then hits the timeout. Sometimes the timeout occurs on the second create, sometimes on the third, fourth, or fifth. While one request is timing out, any other container-create requests also hang. This impacts all users when code interpreter is set to automatic container creation, with the tool enabled, during normal chat. For now we've redeployed resources in West US and do not get the error there, but we don't have quota for more advanced models in that region, so we need this resolved ASAP.

```shell
curl -X POST "https://[redacted].cognitiveservices.azure.com/openai/v1/containers" \
  -H "Content-Type: application/json" \
  -H "api-key: [redacted]" \
  -d '{
    "name": "test-container-eastus2-repro",
    "expires_after": {
      "anchor": "last_active_at",
      "minutes": 20
    }
  }'
```

Posted by a-developer, Feb 23, 2026.
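A scripted version of the repro above can make the failure pattern easier to log. This is a sketch, not an official client: `ENDPOINT` and `API_KEY` are placeholders mirroring the redacted values in the cURL command, and the timeout threshold is an assumption.

```python
import json
import urllib.error
import urllib.request

ENDPOINT = "https://[redacted].cognitiveservices.azure.com"  # placeholder, as in the cURL above
API_KEY = "[redacted]"  # placeholder

def container_payload(name: str, minutes: int = 20) -> dict:
    # Same request body as the cURL repro: expire relative to last activity.
    return {
        "name": name,
        "expires_after": {"anchor": "last_active_at", "minutes": minutes},
    }

def try_create(name: str, timeout_s: float = 60.0) -> bool:
    """POST one container-create request; True on success, False on timeout/error."""
    req = urllib.request.Request(
        f"{ENDPOINT}/openai/v1/containers",
        data=json.dumps(container_payload(name)).encode(),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout_s):
            return True
    except (TimeoutError, urllib.error.URLError):
        return False

# To reproduce, run e.g.:
#   for i in range(1, 11):
#       print(i, try_create(f"test-container-eastus2-repro-{i}"))
# Per the report, the first 1-4 attempts succeed before one hangs.
```

Logging the attempt index at which the hang occurs across several runs would give support a concrete pattern to work from.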
Synthetic Dataset Format from AI Foundry Not Compatible with Evaluation Schema
Current situation: the synthetic dataset created by AI Foundry's synthetic data generation is produced in the following messages format:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "What is the primary purpose?" },
    { "role": "assistant", "content": "The primary purpose is..." }
  ]
}
```

Challenge: when attempting evaluation, especially RAG evaluation, the documentation indicates that the dataset must contain structured fields such as:

- question: the query being asked
- ground_truth: the expected answer

Recommended additional fields: reference_context, metadata.

Example of the required format:

```json
{
  "question": "",
  "ground_truth": "",
  "reference_context": "",
  "metadata": { "document": "" }
}
```

Because the synthetic dataset is in messages format, I am unable to map it directly to the required evaluation schema.

Questions: Is there a recommended or supported way to convert a synthetic dataset generated in AI Foundry's messages format into the structured format required for evaluation? Can the user role be mapped to question, and the assistant role to ground_truth? Is there any built-in transformation option within AI Foundry?

Posted by parulpaul01, Feb 13, 2026.
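Absent a built-in transformation, a minimal sketch of the mapping the post proposes (user role to question, assistant role to ground_truth) is straightforward to script. Field names follow the evaluation schema shown above; the JSONL file layout and the empty reference_context default are assumptions for the caller to adjust.

```python
import json

def messages_to_eval_record(sample: dict, document: str = "") -> dict:
    """Map one messages-format sample to the evaluation schema."""
    by_role = {}
    for msg in sample["messages"]:
        # Keep the first message seen for each role.
        by_role.setdefault(msg["role"], msg["content"])
    return {
        "question": by_role.get("user", ""),
        "ground_truth": by_role.get("assistant", ""),
        "reference_context": "",  # populate from source documents if available
        "metadata": {"document": document},
    }

def convert_jsonl(in_path: str, out_path: str) -> None:
    # Assumes one JSON sample per line (a common synthetic-dataset layout).
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            if line.strip():
                record = messages_to_eval_record(json.loads(line))
                fout.write(json.dumps(record) + "\n")
```

For multi-turn samples this keeps only the first user/assistant pair; conversations with several turns would need a policy for which pair counts as the question and ground truth.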
Foundry Agent deployed to Copilot/Teams Can't Display Images Generated via Code Interpreter
Hello everyone, I've been developing an agent in the new Microsoft Foundry and enabled the Code Interpreter tool for it. In the Agent Playground, I can successfully start a new chat and have the agent generate a chart/image using Code Interpreter. This works as expected in both the old and new Foundry experiences. However, after publishing the agent to Copilot/Teams for my organization, the same prompt that works in the Agent Playground does not function properly. The agent appears to execute the code, but the image is not accessible in Teams. When reviewing the agent traces (via the Traces tab in Foundry), I can see that the agent generates a link to the image in the Code Interpreter sandbox environment, for example:

`[Download the bar chart](sandbox:/mnt/data/bar_chart.png)`

This works correctly within Foundry, but the sandbox path is not accessible from Teams, so the link fails there. Is there an officially supported way to surface Code Interpreter-generated files/images when the agent is deployed to Copilot/Teams? Or is the recommended approach to implement a custom tool that uploads generated files to an external storage location (e.g., SharePoint, Blob Storage, or another file hosting service) and returns a publicly accessible link instead? I've had trouble finding anything about this online. Any guidance would be greatly appreciated. Thank you!
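Pending official guidance, the custom-tool workaround the post suggests could center on rewriting sandbox links before the reply reaches Teams. The sketch below is an assumption, not a documented Foundry API: the `upload` callback stands in for whatever storage you choose (e.g. Blob Storage returning a SAS URL), and only the link-rewriting logic is shown.

```python
import re

# Matches markdown links that point into the Code Interpreter sandbox,
# e.g. [Download the bar chart](sandbox:/mnt/data/bar_chart.png)
SANDBOX_LINK = re.compile(r"\[([^\]]*)\]\((sandbox:/mnt/data/[^)]+)\)")

def rewrite_sandbox_links(text, upload):
    """Replace each sandbox link with the URL returned by upload(path).

    `upload` is a caller-supplied callback that copies the sandbox file to
    external storage (Blob Storage, SharePoint, ...) and returns a publicly
    accessible URL.
    """
    def _sub(match):
        label, sandbox_path = match.group(1), match.group(2)
        return f"[{label}]({upload(sandbox_path)})"
    return SANDBOX_LINK.sub(_sub, text)
```

In a real deployment, the callback would first fetch the file's bytes via the container files API and then upload them to storage; the rewrite itself is the easy part.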
New Foundry Agent Issue
Hi all, I'm creating my first agent via New Foundry, so my questions are probably basic. As always, everything seemed straightforward until deployment. I created an agent using gpt-4.1, added a list of instructions, and then used the Tools → Upload files functionality to attach a selection of reference documents. Everything worked perfectly in Preview mode. I then used the default option to create a bot service, and it deployed successfully. To test it, I used the Individual Scope option (with the intention to share later with a couple of people; I haven't worked that part out yet). Like magic, it appeared in my Teams and M365 Copilot, which was amazing, and then I ran my first search. In Copilot, it thought for a long time and then returned an error (screenshot omitted); in Teams, nothing happens at all. I've looked around for help but drawn a blank. I'm fairly sure it's some kind of permissioning/access issue somewhere, but I can't find where. Any help would be hugely appreciated.

Posted by NewStarterKickoff, Feb 12, 2026.

Is there a way to connect 2 AI Foundry instances to the same Cosmos containers?
I defined an Azure AI Foundry connection for Azure Cosmos DB and BYO thread storage in Azure AI Agent Service using these instructions: Integration with Azure AI Agent Service - Azure Cosmos DB for NoSQL | Microsoft Learn. I see that it created three containers under the Cosmos DB account I provided:

- <guid>-agent-entity-store
- v-system-thread-message-store
- <guid>-thread-message-store

I then created another AI Foundry resource and added a connection to the same Cosmos DB, and it created three different containers under the same database. Is there a way to make them use the same exact containers? I want multiple AI Foundry instances to share the same Cosmos containers for managing their data.
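The container names listed above suggest why a second Foundry resource gets its own set: two of the three are prefixed with a per-workspace GUID, so each workspace maps to distinct containers by construction. A tiny illustration of that naming pattern, inferred purely from the three names in this post (the exact scheme is an assumption):

```python
def expected_thread_containers(workspace_guid: str) -> list:
    """Container names the BYO thread storage setup appears to create
    for a given Foundry workspace GUID (pattern inferred from one setup)."""
    return [
        f"{workspace_guid}-agent-entity-store",
        "v-system-thread-message-store",  # shared name, not GUID-prefixed
        f"{workspace_guid}-thread-message-store",
    ]
```

If the prefix is indeed derived from the workspace identity, pointing two Foundry resources at the exact same containers would require the service to support overriding it, which is the crux of the question.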
Searching for a simple guide to index SharePoint and publish an agent in Foundry
Hey all, does anyone have a good guide or best practices for this setup in Foundry?

- SharePoint as data source
- GPT model (document + image indexing, ideally vectorized/embeddings)
- Create an agent and share the agent
- Restrict access to the agent to specific users/groups only

Looking for tutorials, examples, or real-world setups. Thanks!

Posted by romanazurelabit, Feb 02, 2026.

Publishing New Foundry Agent to M365 and Teams (Org scope)
Hello all, I've been trying to publish a small agent from the new Foundry to M365 and Teams following the official documentation, but I am missing something. Please help! The creation part of the agent is easy, and I get to the point where I want to publish it to users with an Org scope. At this point, I would need to deploy the agent to users in the Microsoft 365 Admin Center (MAC). However, when I open MAC, there is nothing to validate! My new agent doesn't appear anywhere in M365 Copilot or Teams, for me or for my users. What am I missing? Do I need to do something in Entra as well? Thanks!

Posted by JMarc, Jan 14, 2026.

Azure Document Intelligence and Content Understanding
Hello, our customer has dozens of Excel and PDF files. These files come in various formats, and the layouts may change over time. For example, some files provide data in a standard tabular structure, others use pivot-style Excel layouts, and some follow more complex or semi-structured formats. In total, we currently have approximately 150 distinct Excel templates and 80 distinct PDF templates. We need to extract information from these files and ingest it into normalized tables. Our requirement is therefore to automatically infer the structure of each file, extract the required values, and load the results into Databricks tables. Given that there are already many template variations, and that new templates may emerge over time, what would be the recommended pipeline, technology stack, and architecture? Should we prefer Azure Document Intelligence? One option would be to create a custom model per template type. However, when a user uploads a new file, how can we reliably match the file to the correct existing model? Additionally, what should happen if a user uploads an Excel/PDF file in a significantly different format that does not resemble any existing template?

Posted by rlxnw84, Jan 14, 2026.
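One common pattern for the routing question (matching an uploaded file to the correct per-template model) is to fingerprint the extracted header or field names and compare against each known template's fingerprint, flagging files below a similarity threshold as new layouts. The sketch below is an illustration of the idea using stdlib difflib, not a Document Intelligence feature; the fingerprint function and the 0.8 threshold are assumptions to tune.

```python
import difflib

def fingerprint(column_names):
    """Normalize extracted header/column names into a comparable token list."""
    return sorted(n.strip().lower() for n in column_names if n.strip())

def match_template(column_names, templates, threshold=0.8):
    """Return (template_id, score) for the best-matching known template,
    or (None, score) if nothing clears the threshold, i.e. a new layout."""
    probe = fingerprint(column_names)
    best_id, best_score = None, 0.0
    for template_id, template_cols in templates.items():
        score = difflib.SequenceMatcher(None, probe, fingerprint(template_cols)).ratio()
        if score > best_score:
            best_id, best_score = template_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

Files that return None would go down a fallback path (a generic layout model or human review) and, once labeled, register as a new template, which addresses the "significantly different format" case.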
OpenAI model continuity plan for Standard Deployments in Australia East
Hi, I am working with an Azure customer in Australia on agentic AI solutions. We have provisioned standard deployments of GPT-4o in Australia East due to the customer's need for data sovereignty. We recently noticed in the customer's Azure AI Foundry that the standard deployment of GPT-4o in Australia East has a model retirement date of 3 June 2026. This is the most advanced OpenAI model available for this deployment type. What is Azure's plan for OpenAI model availability for standard deployments in Australia East going forward? Will our customer have access to 4o or a replacement model? Thanks

Posted by osloman, Jan 13, 2026.

Published agent from Foundry doesn't work at all in Teams and M365
I've switched to the new version of Azure AI Foundry (New) and created a project there. Within this project, I created an agent and connected two custom MCP servers to it. The agent works correctly inside the Foundry Playground and responds to all test queries as expected. My goal was to make this agent available to my organization in Microsoft Teams / Microsoft 365 Copilot, so I followed all the steps described in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Issue description

The first problems started at Step 8 (publishing the agent).

Organization scope publishing: I published the agent using Organization scope. The agent appeared in the Microsoft Admin Center in the list of agents. However, when an administrator from my organization attempted to approve it, the approval always failed with a generic error: "Sorry, something went wrong". No diagnostic information, error codes, or logs were provided. We tried recreating and republishing the agent multiple times, but the result was always the same.

Shared scope publishing: As a workaround, I published the agent using Shared scope. In this case, the agent finally appeared in Microsoft Teams and Microsoft 365 Copilot. I can now see the agent under Teams → Copilot and under Teams → Applications → Manage applications. However, this revealed the main issue.

Main problem: The published agent cannot complete any query in Teams, despite the fact that the agent works perfectly in the Foundry Playground and responded correctly to the same prompts before publishing. In Teams, every query results in messages such as: "Sorry, something went wrong. Try to complete a query later."

Simplification test: To rule out MCP- or instruction-related issues, I disabled all MCP tools, removed all complex instructions, and left only a minimal system prompt: "When the user types 123, return 456". I then republished the agent.
The agent appeared in Teams again, but the behavior did not change: it does not respond at all.

Permissions warning in Teams: When I go to Teams → Applications → Manage Applications → My agent → View details, I see a red warning label: "Permissions needed. Ask your IT admin to add InfoConnect Agent to this team/chat/meeting." This message is confusing because the administrator has already added all required permissions, all relevant permissions were granted in Microsoft Entra ID, and admin consent was provided. Because of this warning, I also cannot properly share the agent with my colleagues.

Additional observation: I have a similar agent configured in Copilot Studio. It shows the same permissions warning, yet that agent still responds correctly in Teams and can successfully call some MCP tools. This suggests the issue is specific to Azure AI Foundry agents, not to Teams or to tenant-wide permissions in general.

Steps already taken to resolve the issue: I configured all required RBAC roles in the Azure portal according to https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/rbac-foundry?view=foundry-classic. During publishing, an agent-bot application was automatically created.
I added my account to this bot with the Azure AI User role, and also assigned Azure AI User to the project's managed identity and to the project resource itself. I verified all permissions related to publishing AI agents in the Microsoft Admin Center and the Microsoft Teams Admin Center. I simplified and republished the agent multiple times, deleted the automatically created agent-bot and allowed Foundry to recreate it, and created a new Foundry project, configured several simple agents, and published them; the same issue occurred. I tried publishing with different models (gpt-4.1, o4-mini). I manually configured permissions in Microsoft Entra ID → App registrations / Enterprise applications → API permissions, adding both Delegated and Application permissions and granting admin consent. I added myself and my colleagues as Azure AI User under Foundry → Project → Project users, and followed all steps mentioned in this related discussion: https://techcommunity.microsoft.com/discussions/azure-ai-foundry-discussions/unable-to-publish-foundry-agent-to-m365-copilot-or-teams/4481420

Questions: How can I make a Foundry agent work correctly in Microsoft Teams? Why does the agent fail to process requests in Teams while working correctly in Foundry? What does the "Permissions needed" warning actually mean for Foundry agents? How can I properly share the agent with other users in my organization? Any guidance, diagnostics, or clarification on the correct publishing and permission model for Foundry agents in Teams would be greatly appreciated.

Posted by AlexeyPrudnikov, Jan 13, 2026. Marked as solved.