Recent Discussions
The Business Foundation: Why Most Companies Aren’t Ready for Agentic AI
Before agents can execute decisions, organizations must redesign how they structure responsibility, data, governance, and operational context so that autonomy can scale.

The enterprise AI landscape has shifted. Organizations are moving beyond chatbots and isolated predictive models toward systems that can plan, decide, and execute multi-step work across finance, engineering operations, supply chains, and customer service. Many analysts now expect agentic AI to unlock major productivity gains across knowledge work. But despite the momentum, adoption remains limited. As of 2025, only about 2% of organizations have deployed agent-based systems at real operational scale, while most remain stuck in pilots. The reason is not model capability. It is readiness.

The Core Problem

Most organizations still treat AI adoption as a technical rollout exercise and measure progress through deployment indicators such as copilots enabled, pilots launched, or models evaluated. These metrics reflect experimentation activity, but they do not show whether an organization is ready to operate systems that make decisions and execute actions inside business workflows. Agentic systems do more than generate insights; they participate directly in operational processes. The gap between deploying AI tools and safely delegating decision-making authority to them is where many transformation efforts begin to stall.

True enterprise readiness for agentic AI is not defined by how many models an organization deploys or how many pilots it launches. It depends on whether the organization can safely delegate bounded decisions to autonomous systems. In practice, this requires:

- Strategy and decision scoping: identifying where autonomous execution creates value and where human oversight must remain in place
- Process and decision-system maturity: redesigning workflows for human-agent collaboration with clear escalation boundaries
- Context-ready data foundations: ensuring agents operate on consistent, policy-aware operational context rather than fragmented data silos
- Governance and accountability structures: defining what agents may recommend, execute, escalate, or never touch, supported by auditability and oversight
- Team readiness and lifecycle management: preparing teams to supervise autonomous execution and managing agents as ongoing operational participants rather than static tools
- Coordination architecture readiness: aligning multiple agents across domains so local optimization does not create organizational conflict

This article explains why traditional enterprise environments are not yet prepared for autonomous agents, what true agentic readiness looks like in practice, and the sequence of organizational changes required before decision-capable systems can be deployed safely at scale.

I. The Readiness Illusion and the Root Causes of Failure

Most organizations are deploying agentic systems into environments designed exclusively for human execution. That mismatch produces predictable friction across five structural layers.

1. Fragmented Operational Context (The Data Problem)

Enterprises have plenty of data. What they often lack is usable context. Traditional systems record what happened. Agents also need to understand why something happened, how systems are connected, and where policy limits apply. In most organizations, customer systems, telemetry platforms, identity services, and finance tools do not stay aligned in real time.
As a result, agents operate across disconnected information rather than a shared operational picture. This creates real risk. With generative AI, poor data quality usually produces a weak answer. With agentic AI, poor data quality can produce the wrong action at scale. More APIs, more pipelines, and more dashboards do not fix this by themselves. Without a shared semantic context across systems, agents can still make decisions that are internally logical but operationally wrong. For example, an agent may see that a customer received a large discount and conclude that future discounts should be limited, while missing that the original discount was approved because of a service outage and a retention risk. The data is available, but the business meaning behind it is not.

2. Undocumented Decision Systems

Most organizations document workflows. However, very few document decision authority clearly enough for autonomous execution. Agents need to know where they are allowed to act, when they must escalate, and which decisions remain human-only. Without these boundaries, organizations often follow the same pattern: the first unexpected situation appears, confidence drops, and the agent is switched off. This is not a model problem. It is a decision-structure problem. Before deploying agents, organizations must be able to explain which decisions can be delegated and who remains responsible for each step. Many cannot yet do this.

3. The Governance Paradox

Agentic systems do not fit traditional governance models. Most organizations still assume a simple structure:

user → application → resource

Agent-based systems introduce a new layer:

user → agent → tools → resource

This change affects access control, compliance processes, and audit visibility. Organizations usually buy agents like software tools but must manage them more like team members. That gap is rarely addressed before deployment begins. This issue is already visible today: many enterprises are using vendor copilots and embedded AI features inside business systems without clear ownership, audit coverage, or governance rules. This creates a growing "shadow AI" layer even before intentional agent programs start.

4. Identity and Accountability Ambiguity

Many organizations cannot clearly answer a simple question: who is responsible when an agent makes a mistake? In practice, agents often receive permissions that are broader than necessary, execution traces are difficult to follow across multiple systems, and accountability is split between IT, compliance, and business teams. Without clear attribution, autonomy introduces hidden risk instead of efficiency. Delegation without accountability is not automation. It is unmanaged risk.

5. Organizational Misalignment

Most transformation programs still assume employees will use AI as a tool. Agentic environments change the role of employees from operators to supervisors. People are expected to review outcomes, guide behavior, and manage exceptions instead of executing every step themselves. Research from BCG shows that around 70% of AI project challenges come from people and process issues rather than technology. Organizations that invest in change management are significantly more likely to see successful results. Organizational readiness is not something to address later. It is required before agents can operate safely.

Common Failure Patterns at a Glance

Failure patterns like these are already visible in real deployments. The Klarna case illustrates the challenge well.
After replacing several hundred customer service roles with AI, the company later reported lower resolution quality for complex cases, declining satisfaction scores, and higher escalation rates, which led to renewed hiring in support roles. The outcome did not point to a failure of the model itself. It highlighted what happens when autonomous systems are introduced without the supporting process, governance, and team structures required for sustained operation.

II. Defining True Agentic Readiness

Agentic readiness is not just about having the right tools in place. It is about whether the organization has the capability to use autonomous systems safely and effectively.

Definition: Agentic readiness is the ability to safely delegate bounded operational decisions to autonomous systems while maintaining accountability, observability, and policy alignment across the full execution chain.

Research consistently shows that organizations benefit from AI only when multiple maturity layers advance together. The MIT CISR AI Maturity Model, based on data from 721 companies, demonstrates that financial performance improves as organizations progress through the stages. Companies in early stages often perform below industry averages, while those reaching later stages perform significantly better. The key insight is that maturity is cumulative: organizations cannot skip foundational steps and still expect reliable outcomes. For agentic systems, those cumulative layers include strategy alignment, decision-ready processes, context-ready data, governance structures, organizational roles, and technical architecture. When only some of these elements are in place, organizations produce pilots. When they advance together, organizations produce transformation.

From Activity Metrics to Outcome Metrics

One of the clearest signs of readiness is how an organization measures progress. Organizations at an early stage usually focus on activity:

- Number of models deployed
- Pilots launched
- Features enabled
- User onboarding numbers and API call volume

More mature organizations focus on outcomes:

- Better decision quality and fewer errors
- Higher throughput for clearly defined tasks
- Consistent operation within safe autonomy boundaries
- Complete audit trails and accurate escalation handling

This is not a semantic distinction. Organizations measuring activity invest indefinitely in pilots because they have no signal telling them a pilot has succeeded or failed. The measurement framework is itself a prerequisite for the transformation sequence.

III. The Transformation Sequence Most Organizations Skip

Many organizations begin agent adoption in the wrong order. Platforms are procured before governance is defined. Models are evaluated before workflows are structured. Autonomy is introduced before decision authority is mapped. The result is not faster progress. It is earlier failure, followed by expensive cleanup later. In traditional cloud transformation, architecture precedes automation. Agentic transformation follows the same rule: decision structure must exist before delegation can scale.

Step 1: Strategic Alignment and Decision Scoping

Organizations should begin by identifying where autonomy creates value safely, not where it is technically possible and not where ambitions are highest. Strong early candidates usually share the same characteristics: structured decisions, bounded scope, reversible actions, and high execution frequency.
Typical examples include incident triage and routing, capacity classification, environment status updates, and prioritization support. These are good starting points not because they are simple, but because failures are visible, recoverable, and useful for learning. Delegation should grow gradually from bounded decision spaces toward broader authority. Organizations that struggle often start with highly visible, high-risk use cases because the business case looks attractive. Organizations that succeed usually begin with frequent, lower-impact decisions where feedback loops are short and improvements can happen quickly.

Step 2: Process Maturity and Boundary Setting

Agents do not fix broken workflows. They execute them faster. If a process depends on informal judgment, tribal knowledge, or undocumented exception handling, an agent will reproduce those weaknesses at machine speed. Before introducing autonomy, organizations should establish structured runbooks with clear execution paths, explicit escalation logic an agent can evaluate, defined exception-handling rules that do not rely on intuition, and clear boundaries between decisions an agent may take and those that must remain with humans. This level of discipline requires documentation precision that many organizations have never needed before. A statement such as "the engineer uses judgment" is not a runbook step. It is an undocumented dependency that will later appear as an agent failure. This is also where leaders face a practical choice: add agents on top of fragile legacy workflows, or redesign those workflows so delegation can happen safely. In many cases, the second path is slower at first but far more durable.

Step 3: Data Context and Decision Awareness

Agents cannot operate reliably in fragmented environments. The solution is not simply collecting more data. What they require is decision-aware context: structured knowledge about relationships between systems, service dependencies, environment classification, policy boundaries, and operational intent. This is a different challenge from building analytics platforms. Analytics depends on broad visibility across large datasets. Agentic execution depends on precise, current, and consistent information at the moment a decision is made. A customer record that is accurate enough for reporting may not be reliable enough for an agent executing a contract action. Because of this difference, data readiness becomes a leadership concern rather than only an infrastructure task. Microsoft's digital transformation guidance captures this clearly with the principle "no AI without data": organizations should identify critical data sources, establish governance ownership, improve quality, and define controlled access before introducing agents into operational workflows.

Step 4: Governance and Delegation Redesign

Organizations must explicitly define four categories of agent authority before deployment, as illustrated in the sketch after this list:

- What agents may recommend (advisory boundary)
- What agents may execute autonomously (execution boundary)
- What requires human approval before execution (escalation boundary)
- What remains permanently restricted regardless of confidence (prohibition boundary)
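As a rough illustration of how such a policy can become machine-evaluable, the sketch below expresses the four categories as a simple lookup. The action names, and the rule that unknown actions default to escalation, are illustrative assumptions rather than part of any specific platform.

    # Hypothetical delegation policy for one agent. Action names are
    # illustrative; a real policy would come from governance review.
    DELEGATION_POLICY = {
        "advisory":   {"suggest_discount", "draft_customer_reply"},
        "execution":  {"route_ticket", "update_environment_status"},
        "escalation": {"issue_refund", "change_capacity_tier"},
        "prohibited": {"delete_customer_data", "modify_contract_terms"},
    }

    def classify_action(action: str) -> str:
        """Map a requested action to its authority category.

        Anything not explicitly listed defaults to escalation:
        unknown actions must never be executed silently.
        """
        for category, actions in DELEGATION_POLICY.items():
            if action in actions:
                return category
        return "escalation"

    # Example: the agent may route tickets on its own, but a refund
    # request is handed to a human approver before execution.
    print(classify_action("route_ticket"))  # -> execution
    print(classify_action("issue_refund"))  # -> escalation

The design point is that the policy is data, not prompt text: it can be versioned, audited, and changed by the governance function without retraining or re-prompting the agent.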
These policies cannot remain static. Agentic systems require continuous supervision, not periodic review. Research supports this shift: studies of governance professionals working with autonomous systems show that adopting traditional Enterprise Risk Management frameworks alone does not significantly reduce governance incidents. What makes the difference is integrating human oversight into execution loops and strengthening machine identity security.

In practice, this means organizations need a delegated-autonomy governance function: a cross-functional group with representation from IT, compliance, legal, and business teams that continuously defines and monitors the boundaries of agent behavior. This is different from extending existing approval committees. Governance must move from acting as a gate before deployment to operating as a supervision layer throughout the lifecycle of the agent. This creates a basic operational tension: organizations adopt agents to reduce manual work, but safe autonomy requires stronger supervision, better observability, and tighter control over identity and permissions, especially in the early stages.

Step 5: Operating Model Redesign and Human-Agent Collaboration

Agentic systems create responsibilities that do not yet exist in most organizations. This shift is not mainly about replacing people with agents. It is about redesigning how people work with them, supervise them, and remain accountable for outcomes. New operational roles typically include:

- Agent reliability engineers who monitor performance, detect degradation, and define retraining triggers
- Policy designers who translate business rules into machine-evaluable decision logic
- Workflow supervisors who oversee autonomous execution and handle escalations
- Context curators who maintain the data foundations agents depend on for accurate reasoning

Organizations that succeed with agents do not treat them as static automation tools. They treat them as managed participants inside workflows. That is why they need an HR layer for agents. An HR layer for agents means applying the same lifecycle thinking used for people to autonomous systems. Before an agent is allowed to operate, it needs a clearly defined role, scope, level of authority, and access to the right systems. Once deployed, its performance must be reviewed over time, its behavior monitored, and its permissions adjusted when quality drops or risks increase. When the agent no longer fits the workflow, it should be retired or replaced instead of being left running by default. In practice, this means agent management should include:

- Onboarding: defining scope, authority, and access boundaries
- Supervision: observability, escalation paths, and performance review
- Retraining or re-scoping: when quality declines or conditions change
- Retirement: when the agent no longer fits the process or creates more risk than value

In higher-risk workflows, this HR layer must also include graceful degradation. For example, an underperforming agent may automatically lose write access, be moved to read-only mode, and hand control back to a human supervisor until its behavior is corrected.
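A minimal sketch of what automatic degradation could look like in code, assuming a hypothetical supervision loop; the error-rate threshold, the permission model, and the notification hook are all illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class AgentStatus:
        name: str
        error_rate: float          # rolling share of failed or reverted actions
        mode: str = "read_write"   # "read_write" or "read_only"

    # Illustrative threshold; a real value would come from governance review.
    MAX_ERROR_RATE = 0.05

    def notify_supervisor(agent: AgentStatus) -> None:
        # Hypothetical escalation hook (ticket, page, dashboard alert, ...).
        print(f"[escalation] {agent.name} moved to read-only; human review required")

    def apply_graceful_degradation(agent: AgentStatus) -> AgentStatus:
        """Demote an underperforming agent to read-only mode and hand
        control back to a human supervisor until behavior is corrected."""
        if agent.mode == "read_write" and agent.error_rate > MAX_ERROR_RATE:
            agent.mode = "read_only"
            notify_supervisor(agent)
        return agent

    # Example: an agent whose error rate crosses the threshold loses write access.
    apply_graceful_degradation(AgentStatus(name="billing-agent", error_rate=0.08))

The point of the sketch is the direction of failure: the agent degrades toward less authority, never toward silent continuation.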
This shift also requires leadership readiness. The Harvard 2025 Global Leadership Development Study found that 71% of senior leaders now see the ability to lead through continuous change as critical, yet only 36% say AI is fully integrated into their strategy. That gap between intention and execution is where many organizational transformation programs begin to stall.

Step 6: Coordination Architecture Readiness

As organizations deploy agents across multiple domains, a new challenge appears: agents begin optimizing locally instead of organizationally. An agent focused on cost efficiency in one area may conflict with another agent responsible for quality assurance elsewhere. Without coordination structures, these conflicts often remain invisible until they surface as operational failures. Coordination architecture helps align agent behavior across the organization. It ensures policy consistency between agents, maintains a shared understanding of the operational environment, prevents conflicts when actions intersect, and supports stable communication between agents working together across workflows. This capability is not required for the first agent deployment. It becomes important as soon as organizations begin operating multiple agents in parallel. Many organizations encounter coordination problems earlier than expected, which is why coordination readiness belongs in the transformation sequence even if its implementation happens later. Local optimization is rarely what enterprises intend. Coordination architecture is how organizations prevent it from becoming what they get.

IV. The Regulatory Clock Is Already Running

For organizations operating in or serving European markets, readiness is no longer only a strategic question. It is also a regulatory one. The EU AI Act's high-risk provisions take effect in August 2026, with potential penalties reaching €35 million or 7% of global revenue. Colorado's AI Act follows in June 2026, and a growing number of U.S. states now require documented AI governance programs for specific sectors and use cases. The governance and data foundations described earlier in this article are therefore not only best practice. For many organizations, they are becoming compliance prerequisites. Treating readiness as optional before deployment increasingly means accepting regulatory exposure before value is realized. The transformation sequence described here is not a slower path to deployment. It is the only path that avoids accumulating technical and legal risk at the same time.

V. Conclusion: Shifting Toward Outcome-Based Pragmatism

Agentic systems rarely fail because language models are incapable. They fail because they are introduced into environments designed for human execution, governed by frameworks built for deterministic software, and evaluated using metrics that cannot distinguish a promising pilot from a production-ready capability. The readiness gap is structural and, in many cases, self-inflicted. Organizations skip foundational steps because platform procurement is faster, more visible, and easier to justify internally than operating-model redesign. The result is earlier failure, higher remediation cost, and, in regulated industries, increasing legal exposure.

What this means in practice:

- Stop measuring readiness through activity indicators; measure decision quality, execution safety, throughput improvement, and bounded autonomy performance.
- Establish governance and data foundations before platform rollout.
- Begin organizational transition planning before deployment.
- Define decision authority before the first agent workflow is introduced.

Only then can enterprises safely unlock the productivity gains promised by agentic systems, not because the technology suddenly becomes capable, but because the organization becomes ready to use it.

Up Next in This Series

Part 2 looks at the cloud foundation needed for safe agent deployment, including identity-first architecture, observability, policy controls, and the platform constraints that often appear only after design decisions have been made.
Part 3 focuses on how to design agents that work reliably in enterprise environments, including RAG maturity, loop design, multi-agent coordination, and human oversight built into the architecture from the start.

References

Weinberg, A. I. (2025). A Framework for the Adoption and Integration of Generative AI in Midsize Organizations and Enterprises (FAIGMOE).
Patel, R. (2026). Agentic AI Frameworks: A Complete Enterprise Guide for 2026. Space-O Technologies.
Microsoft Learn. Agentic AI maturity model.
Keenan, K. (2026). How the right context can reshape agentic AI's productivity output. Business Insider / Reltio.
Ransbotham, S., Kiron, D., Khodabandeh, S., Iyer, S., & Das, A. (2025). The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI. MIT Sloan Management Review & Boston Consulting Group.
o3-mini not returning reasoning tokens

Hi, I work on a service that leverages o3-mini via Microsoft Foundry. In the past few days, I've observed that when calling o3-mini via Microsoft Foundry, completion_token_details always has the reasoning_tokens value set to 0, regardless of the reasoning setting being used. In my testing, the reasoning still seems to be occurring, since increasing the reasoning effort causes the completion_tokens field to increase by a good amount, but none of the reasoning levels cause the reasoning_tokens value to be anything other than 0. Has anyone else encountered this issue? Thanks! Tom
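For anyone trying to reproduce this, a minimal check might look like the sketch below. It assumes the standard openai Python SDK against an Azure endpoint; the deployment name and API version are placeholders for whatever the resource is configured with.

    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-12-01-preview",  # placeholder; use your configured version
    )

    resp = client.chat.completions.create(
        model="o3-mini",  # your deployment name
        reasoning_effort="high",
        messages=[{"role": "user", "content": "How many primes are below 100?"}],
    )

    # Per the report: completion_tokens grows with reasoning effort,
    # but reasoning_tokens stays at 0 when called through Foundry.
    usage = resp.usage
    print("completion_tokens:", usage.completion_tokens)
    print("reasoning_tokens:", usage.completion_tokens_details.reasoning_tokens)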
Enabling Secure Access to Private Resources with Azure AI Foundry

One of our client's key requirements was to build an AI agent that could securely access private resources without exposing any data over the public internet. To meet this requirement, we followed an architecture similar to the diagram above, leveraging Azure AI Foundry with private networking.

How We Designed the Solution

As shown in the diagram, all core services (Azure Storage, AI Search, Foundry, and Cosmos DB) are placed behind private endpoints within the client's virtual network. This ensures that none of these resources are publicly accessible. We deployed the agent inside a dedicated subnet within the same VNet. This allowed the agent to communicate directly with these services through private endpoints, without any need to traverse the public internet. The private endpoint subnet acts as a secure bridge between the agent and the underlying Azure services. At the same time, the client has full control over the network, including the option to apply firewall rules to manage outbound traffic.

Why This Approach Worked

All communication between the agent and data sources stays within the private network. Sensitive data, including queries and retrieved content, never leaves the network boundary. Access to resources is controlled through private endpoints and proper authorization. This design removes the risks associated with public endpoints and ensures compliance with enterprise security requirements.

Final Outcome

Using this approach, we delivered an AI solution that is both secure and scalable. The client now has an agent that can safely interact with private data sources while maintaining full control over network traffic and access policies. To learn more about the configuration, follow this documentation: https://learn.microsoft.com/en-us/azure/foundry/agents/how-to/virtual-networks

How are you currently securing your AI workloads when accessing sensitive data?
An agent that converses with Fabric Warehouse data using natural language

I'm trying to create an agent that uses Microsoft Foundry to converse with data in a Microsoft Fabric warehouse using natural language. Initially, I tried using Azure AI Search as a tool, but it didn't work due to the 1000-item index limit. I suspect there might be a way to access the warehouse directly without using Azure AI Search, but I don't know how. Could you please tell me how to implement this? Thank you in advance.
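Not an official answer, but one pattern worth exploring is exposing the warehouse's SQL endpoint to the agent as a custom tool. A minimal sketch, assuming the pyodbc driver with Entra ID interactive authentication; the server and database names are placeholders copied from the warehouse's connection settings, and the table in the example is hypothetical.

    import pyodbc  # pip install pyodbc; requires ODBC Driver 18 for SQL Server

    # Placeholders: copy the SQL connection details from your warehouse settings.
    CONN_STR = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<your-workspace>.datawarehouse.fabric.microsoft.com;"
        "Database=<your-warehouse>;"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
    )

    def query_warehouse(sql: str) -> list[dict]:
        """Run a read-only query against the Fabric warehouse SQL endpoint.
        An agent tool could call this with SQL generated from the question."""
        with pyodbc.connect(CONN_STR) as conn:
            cursor = conn.cursor()
            cursor.execute(sql)
            columns = [col[0] for col in cursor.description]
            return [dict(zip(columns, row)) for row in cursor.fetchall()]

    print(query_warehouse("SELECT TOP 5 * FROM sales.orders"))  # hypothetical table

This sidesteps the AI Search index limit entirely, since the data stays in the warehouse and only query results flow to the model.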
Multi-agent systems on Azure: identity, monitoring, and security guardrails

I wrote this piece because I know security concerns around AI agents are one of the main things holding many companies back from getting started. There is a lot of excitement around what agents can do on Azure, especially as multi-agent systems become more practical to build. But for many teams, the real hesitation starts when questions come up around trust, identity, permissions, monitoring, and what happens when something goes wrong. This PDF is my attempt to break that down in a practical way, from an Azure architect's perspective: what multi-agent systems are, where they can fail, and which security layers matter most if you want to build them responsibly in Azure. It is based on hands-on architecture experience, Microsoft guidance, and recent security thinking around agentic systems. Read it on https://medium.com/@SCSA_MJ/multi-agent-systems-on-azure-identity-monitoring-and-security-guardrails-b8b7c82a0c57
Establish an Oracle Database Connection hosted on Azure VM via AI Foundry Agent

I have come across a requirement to create an AI Foundry agent that will accept requests from users like the ones below:

a. "I want to connect to the abcprd database hosted on subscription sub1 and resource group rg1, and check the AWR report from xAM-yPM on a specific date (e.g. 21-Oct-2025)"
b. "Check locking sessions / RMAN backup failures / active sessions from the database abcprd hosted on subscription sub1 and resource group rg1."

The agent should be able to fetch the relevant query from a knowledge base, connect to the database, and run the report for the duration mentioned. It should then fetch the report and pass it to the LLM (GPT-4.1 in our case) for investigation. I am looking for an approach to connect to the Oracle database based on the user's request and execute the query obtained from the knowledge base.
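One possible shape for the execution step, sketched with the python-oracledb driver. Everything here is an assumption for illustration: the user, DSN, and query are placeholders, and in practice the host would be resolved from the subscription and resource group the user names, with credentials pulled from a secret store.

    import oracledb  # pip install oracledb

    def run_knowledge_base_query(sql: str, params: dict) -> list[tuple]:
        """Connect to the Oracle database named by the user and execute a
        query retrieved from the knowledge base. Credentials would normally
        come from a secret store such as Azure Key Vault, not literals."""
        conn = oracledb.connect(
            user="monitoring_user",                    # placeholder
            password="<from-key-vault>",               # placeholder
            dsn="abcprd-vm.example.com:1521/abcprd",   # hypothetical host/service
        )
        try:
            with conn.cursor() as cursor:
                cursor.execute(sql, params)
                return cursor.fetchall()
        finally:
            conn.close()

    # The returned rows (e.g. a locking-session report) are then passed
    # to the model as tool output for investigation.
    rows = run_knowledge_base_query(
        "SELECT sid, username, blocking_session FROM v$session "
        "WHERE blocking_session IS NOT NULL",
        {},
    )

Wrapping this function as a Foundry tool lets the agent pick the query from the knowledge base and supply the time window as parameters, while the connection logic stays deterministic.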
Synthetic Dataset Format from AI Foundry Not Compatible with Evaluation Schema

Current Situation

The synthetic dataset created with AI Foundry's synthetic data generation is produced in the following messages format:

    {
      "messages": [
        { "role": "system", "content": "You are a helpful assistant" },
        { "role": "user", "content": "What is the primary purpose?" },
        { "role": "assistant", "content": "The primary purpose is..." }
      ]
    }

Challenge

When attempting evaluation, especially RAG evaluation, the documentation indicates that the dataset must contain structured fields such as:

- question - the query being asked
- ground_truth - the expected answer

Recommended additional fields:

- reference_context
- metadata

Example required format:

    {
      "question": "",
      "ground_truth": "",
      "reference_context": "",
      "metadata": { "document": "" }
    }

Because the synthetic dataset is in messages format, I am unable to directly map it to the required evaluation schema.

Question

Is there a recommended or supported way to convert the synthetic dataset generated in AI Foundry messages format into the structured format required for evaluation? Can the user role be mapped to question? Can the assistant role be mapped to ground_truth? Is there any built-in transformation option within AI Foundry?
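While waiting for an official answer, the mapping the post proposes is straightforward to script. A minimal sketch, assuming the first user turn maps to question, the assistant reply to ground_truth, and that the dataset is a JSON Lines file with one record per line; the file names are placeholders.

    import json

    def to_eval_record(record: dict) -> dict:
        """Map one {"messages": [...]} record to the evaluation schema,
        treating the user turn as question and the assistant turn as
        ground_truth (raises StopIteration if a turn is missing)."""
        question = next(m["content"] for m in record["messages"] if m["role"] == "user")
        answer = next(m["content"] for m in record["messages"] if m["role"] == "assistant")
        return {
            "question": question,
            "ground_truth": answer,
            "reference_context": "",        # fill from your source documents
            "metadata": {"document": ""},   # fill if available
        }

    with open("synthetic.jsonl") as src, open("eval.jsonl", "w") as dst:
        for line in src:
            dst.write(json.dumps(to_eval_record(json.loads(line))) + "\n")

Note that reference_context cannot be recovered from the messages alone; for RAG evaluation it would need to be joined back from whatever documents seeded the synthetic generation.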
Typo in Azure Foundry Learn

Hi Microsoft Foundry, I am not sure if this is the right place to post this, but I just wanted to report that there is a typo on this specific page: https://learn.microsoft.com/en-us/azure/foundry/openai/supported-languages?tabs=dotnet-secure%2Csecure%2Cpython-entra&pivots=programming-language-python Have a nice day.
o3-deep-research fails with status "incomplete" and reason "content_filter"

I am working on a deep research task over internal data. I'm currently using the Azure OpenAI Responses API with the MCP tool. The underlying MCP server is deployed to ACA with search and fetch tools whose signatures comply with the specification (https://developers.openai.com/apps-sdk/build/mcp-server#company-knowledge-compatibility). The OpenAI client is created with the o3-deep-research model and the MCP tool, and the response status is checked in a loop (https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/deep-research#remote-mcp-server-with-deep-research).

The deep research runs for some time. I can see in the logs that the handshake is made, ListTools is invoked, the search tool is called, and then fetch is called for the queries framed by the model. But intermittently, the response status becomes "incomplete" with the incomplete reason "content_filter". Otherwise the deep research works fine. I am not able to identify the root cause, as there seems to be no way to tell what caused the content filtering, whether it was the prompt or the completion. How can I debug the root cause and rectify this? Or is there a known issue with the o3-deep-research model's intermediate reasoning completions, or could the search and fetch tool results be causing this?

I had uploaded a file and made it available to the MCP server; the search and fetch tools use an Azure OpenAI agent to search the data using File Search, and the fetch tool gets the content of the file based on the id passed. For the same file and the same research topic the issue does not occur always, but intermittently.
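As a starting point for debugging, the response object itself usually carries the stop reason and the partial output. A minimal sketch, assuming the standard OpenAI Python SDK surface for background responses; the client configuration and the creation arguments are elided, and the response id is a placeholder.

    import time
    from openai import OpenAI

    client = OpenAI()  # configure base_url / api_key for your Azure resource

    # resp = client.responses.create(model="o3-deep-research", background=True, ...)
    resp_id = "resp_..."  # id returned from the create call (placeholder)

    while True:
        resp = client.responses.retrieve(resp_id)
        if resp.status not in ("queued", "in_progress"):
            break
        time.sleep(10)

    if resp.status == "incomplete":
        # Reports why the run stopped, e.g. "content_filter".
        print("incomplete reason:", resp.incomplete_details.reason)
        # Inspect the trailing output items to see how far the run got
        # before filtering (reasoning step vs. tool call vs. final answer).
        for item in resp.output[-5:]:
            print(item.type)

Correlating the last output item type with the MCP server's own logs for the same timestamps can at least narrow down whether the filter fired on an intermediate reasoning step or on fetched document content.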
Code Interpreter Container Failing (Timeout) on Create in EastUS2

Hello, we've been using code interpreter reliably for over a year, but starting on 2/21/2026, containers created in our EastUS2 Foundry instance intermittently fail. It is easy to reproduce this by running the cURL below. Run it a few times: it will succeed the first 1-4 times, and then you will hit the timeout. Sometimes the timeout occurs on the second create, sometimes on the 3rd, 4th, or 5th. During the timeout, if any other requests to create a container are made, they also hang. This impacts all users if code interpreter is set to automatic container creation with the tool enabled during normal chat. For now we've redeployed resources in US West and do not get the error there, but we do not have quota for more advanced models in that region, so we need this resolved ASAP.

    curl -X POST "https://[redacted].cognitiveservices.azure.com/openai/v1/containers" \
      -H "Content-Type: application/json" \
      -H "api-key: [redacted]" \
      -d '{
        "name": "test-container-eastus2-repro",
        "expires_after": { "anchor": "last_active_at", "minutes": 20 }
      }'
Error when creating Assistant in Microsoft Foundry using Fabric Data Agent

I am facing an issue when using a Microsoft Fabric Data Agent integrated with the new Microsoft Foundry, and I would like your assistance to investigate it.

Scenario:
1. I created a Data Agent in Microsoft Fabric.
2. I connected this Data Agent as a Tool within a project in the new Microsoft Foundry.
3. I published the agent to Microsoft Teams and Copilot for Microsoft 365.
4. I configured the required Azure permissions, assigning the appropriate roles to the Foundry project Managed Identity (as shown in the attached evidence: Azure AI Developer and Azure AI User roles).

Issue: When trying to use the published agent, I receive the following error:

    Response failed with code tool_user_error: Create assistant failed.
    If issue persists, please use following identifiers in any support request:
    ConversationId = PQbM0hGUvMF0X5EDA62v3-br
    activityId = PQbM0hGUvMF0X5EDA62v3-br|0000000

Additional notes:
- Permissions appear to be correctly configured in Azure.
- The error occurs during the assistant creation/execution phase via Foundry after publishing.
- The same behavior occurs both in Teams and in Copilot for Microsoft 365.

Could you please verify:
- whether there are any additional permissions required when using Fabric Data Agents as Tools in Foundry;
- whether there are any known limitations or specific requirements for publishing to Teams/Copilot M365;
- and analyze the error identifiers provided above.

I appreciate your support and look forward to your guidance on how to resolve this issue.
Foundry Agent deployed to Copilot/Teams Can't Display Images Generated via Code Interpreter

Hello everyone, I've been developing an agent in the new Microsoft Foundry and enabled the Code Interpreter tool for it. In the Agent Playground, I can successfully start a new chat and have the agent generate a chart/image using Code Interpreter. This works as expected in both the old and new Foundry experiences. However, after publishing the agent to Copilot/Teams for my organization, the same prompt that works in the Agent Playground does not function properly. The agent appears to execute the code, but the image is not accessible in Teams. When reviewing the agent traces (via the Traces tab in Foundry), I can see that the agent generates a link to the image in the Code Interpreter sandbox environment, for example:

    [Download the bar chart](sandbox:/mnt/data/bar_chart.png)

This works correctly within Foundry, but the sandbox path is not accessible from Teams, so the link fails there. Is there an officially supported way to surface Code Interpreter-generated files/images when the agent is deployed to Copilot/Teams, or is the recommended approach perhaps to implement a custom tool that uploads generated files to an external storage location (e.g., SharePoint, Blob Storage, or another file hosting service) and returns a publicly accessible link instead? I've been having trouble finding anything about this online. Any guidance would be greatly appreciated. Thank you!
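On the workaround the post itself suggests, a custom tool along these lines could upload the generated file and return a reachable link. A minimal sketch using the azure-storage-blob SDK; the account URL and container name are placeholders, and the returned URL is only reachable from Teams if the container's access level or a SAS token permits it.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient
    # pip install azure-storage-blob azure-identity

    ACCOUNT_URL = "https://<your-account>.blob.core.windows.net"  # placeholder
    CONTAINER = "agent-charts"                                    # placeholder

    def upload_chart(local_path: str, blob_name: str) -> str:
        """Upload a Code Interpreter output file to Blob Storage and
        return its URL for use in the Teams reply."""
        service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
        blob = service.get_blob_client(container=CONTAINER, blob=blob_name)
        with open(local_path, "rb") as data:
            blob.upload_blob(data, overwrite=True)
        return blob.url

    print(upload_chart("bar_chart.png", "bar_chart.png"))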
New Foundry Agent Issue

Hi all, I'm creating my first agent via the new Foundry, so my questions are probably basic. As always, everything seemed straightforward... until deployment. I created an agent using gpt-4.1, added a list of instructions, and then used the Tools → Upload files functionality to attach a selection of reference documents. Everything worked perfectly in Preview mode. I then used the default option to create a bot service, and it deployed successfully. To test it, I used the Individual Scope option (with the intention to share later with a couple of people; I haven't worked that part out yet). Like magic, it appeared in my Teams and M365 Copilot, which was amazing... and then I ran my first search. In Copilot, it thought for a long time and then returned an error; in Teams, nothing happens at all. I've looked around for help but drawn a blank. I'm fairly sure it's some kind of permissioning/access issue somewhere, but I can't find where. Any help would be hugely appreciated.
Is there a way to connect 2 AI Foundry instances to the same Cosmos containers?

I defined an Azure AI Foundry connection for Azure Cosmos DB and BYO thread storage in Azure AI Agent Service by using these instructions: Integration with Azure AI Agent Service - Azure Cosmos DB for NoSQL | Microsoft Learn. I see that it created 3 containers under the Cosmos account I provided:

- <guid>-agent-entity-store
- v-system-thread-message-store
- <guid>-thread-message-store

Now I created another AI Foundry instance and added a connection to the same Cosmos DB for it, and it created 3 different containers under the same database. Is there a way to make them use the same exact containers? I want to use multiple AI Foundry instances that share the same Cosmos containers to manage the data.
Publishing New Foundry Agent to M365 and Teams (Org scope)

Hello all, I've been trying to publish a small agent from the new Foundry to M365 and Teams following the official documentation, but I am missing something. Please help! The creation part of the agent is easy, and I get to the point where I want to publish this to users with an Org scope. At this point, I would need to deploy the agent to users in the Microsoft 365 Admin Center (MAC). However, when I open MAC, there is nothing to validate! My new agent doesn't appear anywhere in M365 Copilot or Teams, for me or for my users. What am I missing?? Do I need to do something in Entra as well? Thanks!
Searching for a simple guide to index SharePoint and publish an agent in Foundry

Hey all, does anyone have a good guide or best practices for this setup in Foundry?

- SharePoint as data source
- GPT model (document + image indexing, ideally vectorized/embeddings)
- Create an agent and share the agent
- Restrict access to the agent to specific users/groups only

Looking for tutorials, examples, or real-world setups. Thanks!
cosmos_vnet_blocked error with BYO standard agent setup

Hi! We've tried deploying the standard agent setup using Terraform as described in https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/virtual-networks?view=foundry-classic, using the Terraform sample at https://github.com/azure-ai-foundry/foundry-samples/tree/main/infrastructure/infrastructure-setup-terraform/15a-private-network-standard-agent-setup/code as a basis for the necessary support in our codebase. However, we keep getting the following error:

    cosmos_vnet_blocked: Access to Cosmos DB is blocked due to VNET configuration.
    Please check your network settings and make sure CosmosDB is public network
    enabled, if this is a public standard agent setup.

Has anyone experienced this error?
Published agent from Foundry doesn't work at all in Teams and M365

I've switched to the new version of Azure AI Foundry and created a project there. Within this project, I created an agent and connected two custom MCP servers to it. The agent works correctly inside the Foundry Playground and responds to all test queries as expected. My goal was to make this agent available for my organization in Microsoft Teams / Microsoft 365 Copilot, so I followed all the steps described in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Issue description

The first problems started at Step 8 (publishing the agent).

Organization scope publishing: I published the agent using Organization scope. The agent appeared in the Microsoft Admin Center in the list of agents. However, when an administrator from my organization attempted to approve it, the approval always failed with a generic error: "Sorry, something went wrong." No diagnostic information, error codes, or logs were provided. We tried recreating and republishing the agent multiple times, but the result was always the same.

Shared scope publishing: As a workaround, I published the agent using Shared scope. In this case, the agent finally appeared in Microsoft Teams and Microsoft 365 Copilot. I can now see the agent in Microsoft Teams → Copilot and in Microsoft Teams → Applications → Manage applications. However, this revealed the main issue.

Main problem

The published agent cannot complete any query in Teams, despite the fact that it works perfectly in the Foundry Playground and responded correctly to the same prompts before publishing. In Teams, every query results in messages such as: "Sorry, something went wrong. Try to complete a query later."

Simplification test

To exclude MCP or instruction-related issues, I disabled all MCP tools, removed all complex instructions, and left only a minimal system prompt: "When the user types 123, return 456." I then republished the agent. The agent appeared in Teams again, but the behavior did not change: it does not respond at all.

Permissions warning in Teams

When I go to Teams → Applications → Manage Applications → My agent → View details, I see a red warning label: "Permissions needed. Ask your IT admin to add InfoConnect Agent to this team/chat/meeting." This message is confusing because the administrator has already added all required permissions, all relevant permissions were granted in Microsoft Entra ID, and admin consent was provided. Because of this warning, I also cannot properly share the agent with my colleagues.

Additional observation

I have a similar agent configured in Copilot Studio. It shows the same permissions warning; however, that agent still responds correctly in Teams and can successfully call some MCP tools. This suggests that the issue is specific to Azure AI Foundry agents, not to Teams or tenant-wide permissions in general.

Steps already taken to resolve the issue

- Configured all required RBAC roles in the Azure portal according to https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/rbac-foundry?view=foundry-classic
- During publishing, an agent-bot application was automatically created; I added my account to this bot with the Azure AI User role
- Also assigned Azure AI User to the project's Managed Identity and to the project resource itself
- Verified all permissions related to AI agent publishing in the Microsoft Admin Center and the Microsoft Teams Admin Center
- Simplified and republished the agent multiple times
- Deleted the automatically created agent-bot and allowed Foundry to recreate it
- Created a new Foundry project, configured several simple agents, and published them; the same issue occurs
- Tried publishing with different models: gpt-4.1, o4-mini
- Manually configured permissions in Microsoft Entra ID → App registrations / Enterprise applications → API permissions; added both Delegated and Application permissions and granted admin consent
- Added myself and my colleagues as Azure AI User in Foundry → Project → Project users
- Followed all steps mentioned in this related discussion: https://techcommunity.microsoft.com/discussions/azure-ai-foundry-discussions/unable-to-publish-foundry-agent-to-m365-copilot-or-teams/4481420

Questions

- How can I make a Foundry agent work correctly in Microsoft Teams?
- Why does the agent fail to process requests in Teams while working correctly in Foundry?
- What does the "Permissions needed" warning actually mean for Foundry agents?
- How can I properly share the agent with other users in my organization?

Any guidance, diagnostics, or clarification on the correct publishing and permission model for Foundry agents in Teams would be greatly appreciated.
AI Model started to miss a delimiter in the invoice

Hi, we've been trying to use AI Builder to read standardized invoices in PDF documents (not scans, no images, plain text). We have used both the invoice-ready template and fixed-template documents to train the model, with around 20 documents covering various scenarios: one page of products, two pages, and so on. The pre-defined invoice model correctly found the item lines in the invoice and correctly read the values with delimiters in tables. It worked like magic for some time, but recently the model started to miss the decimal delimiter (comma ","), and only in invoices with more than one item. We had two collections for training: single-item invoices and multi-item invoices. I think it could have something to do with the published model moving from version 3.1 to 4.0, but I can't confirm. The model still reads all the total values correctly for all documents, but for multi-line invoices it misreads only the item lines: the digits are read perfectly, but the commas are missed, i.e. it reads 58,58 as 5858. The decimal delimiter in the number field amount is set to comma; I have tried changing it to text, but the result is the same. Has anyone had such issues, or any experience with how I could adjust the collections to retrain the model? I suppose retraining is the only way to fix it right now. Alternatively, how can I revert the model version to 3.1 to confirm the issue appeared after the update?
Open AI model continuity plan for Standard Deployments in Australia East

Hi, I am working with an Azure customer in Australia on agentic AI solutions. We have provisioned standard deployments of GPT-4o in Australia East due to the customer's need for data sovereignty. We have recently noticed in the customer's Azure AI Foundry that the standard deployment of GPT-4o in Australia East has a model retirement date of 3rd June 2026, and this is the most advanced OpenAI model available for this deployment type. What is Azure's plan for OpenAI model availability for standard deployments in Australia East going forward? Will our customer have access to 4o or a replacement model? Thanks!
Events
Recent Blogs
- This week's Model Mondays edition spans three distinct layers of the AI application stack: Cohere's cohere-transcribe, a 2B Automatic Speech Recognition (ASR) model that ranks first on the Open ASR L... (Apr 06, 2026)
- From the Field: Why This Integration Works. As an experienced AI Cloud Solution Architect working in Greater China Region (GCR), I've seen one emerging pattern that delivers quick wins for some of m... (Apr 06, 2026)