Latest Discussions
Published agent from Foundry doesn't work at all in Teams and M365
I've switched to the new version of Azure AI Foundry (New) and created a project there. Within this project, I created an Agent and connected two custom MCP servers to it. The agent works correctly inside the Foundry Playground and responds to all test queries as expected. My goal was to make this agent available to my organization in Microsoft Teams / Microsoft 365 Copilot, so I followed all the steps described in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Issue description

The first problems started at Step 8 (publishing the agent).

Organization scope publishing

I published the agent using Organization scope. The agent appeared in the Microsoft Admin Center in the list of agents. However, when an administrator from my organization attempted to approve it, the approval always failed with a generic error: "Sorry, something went wrong". No diagnostic information, error codes, or logs were provided. We tried recreating and republishing the agent multiple times, but the result was always the same.

Shared scope publishing

As a workaround, I published the agent using Shared scope. In this case, the agent finally appeared in Microsoft Teams and Microsoft 365 Copilot. I can now see the agent here:

- Microsoft Teams → Copilot
- Microsoft Teams → Applications → Manage applications

However, this revealed the main issue.

Main problem

The published agent cannot complete any query in Teams, despite the fact that:

- The agent works perfectly in the Foundry Playground
- The agent responded correctly to the same prompts before publishing

In Teams, every query results in messages such as: "Sorry, something went wrong. Try to complete a query later."

Simplification test

To rule out MCP- or instruction-related issues, I did the following:

- Disabled all MCP tools
- Removed all complex instructions
- Left only a minimal system prompt: "When the user types 123, return 456"

I then republished the agent. The agent appeared in Teams again, but the behavior did not change: it does not respond at all. (A sketch of the same smoke test run through the SDK is shown below.)
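For reference, here is a minimal sketch of that smoke test driven from the Python SDK instead of the Playground, to confirm the agent itself answers before publishing. The endpoint is a placeholder, and this assumes the azure-ai-projects package (1.x) with DefaultAzureCredential able to sign in:

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Placeholder endpoint: use your own Foundry project endpoint.
project = AIProjectClient(
    endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
)

# The same minimal agent used in the simplification test above.
agent = project.agents.create_agent(
    model="gpt-4.1",
    name="minimal-smoke-test",
    instructions="When the user types 123, return 456",
)

# Ask "123" and check that a reply comes back before publishing.
thread = project.agents.threads.create()
project.agents.messages.create(thread_id=thread.id, role="user", content="123")
run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
print("run status:", run.status)

for message in project.agents.messages.list(thread_id=thread.id):
    # content is a list of typed message parts
    print(message.role, ":", message.content)
```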
Permissions warning in Teams

When I go to Teams → Applications → Manage Applications → My agent → View details, I see a red warning label: "Permissions needed. Ask your IT admin to add InfoConnect Agent to this team/chat/meeting."

This message is confusing because:

- The administrator has already added all required permissions
- All relevant permissions were granted in Microsoft Entra ID
- Admin consent was provided

Because of this warning, I also cannot properly share the agent with my colleagues.

Additional observation

I have a similar agent configured in Copilot Studio. It shows the same permissions warning, yet that agent still responds correctly in Teams and can successfully call some MCP tools. This suggests that the issue is specific to Azure AI Foundry agents, not to Teams or tenant-wide permissions in general.

Steps already taken to resolve the issue

- Configured all required RBAC roles in the Azure Portal according to https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/rbac-foundry?view=foundry-classic
- During publishing, an agent-bot application was automatically created; I added my account to this bot with the Azure AI User role
- Also assigned Azure AI User to the project's Managed Identity and to the project resource itself
- Verified all permissions related to AI agent publishing in the Microsoft Admin Center and the Microsoft Teams Admin Center
- Simplified and republished the agent multiple times
- Deleted the automatically created agent-bot and allowed Foundry to recreate it
- Created a new Foundry project, configured several simple agents, and published them; the same issue occurs
- Tried publishing with different models: gpt-4.1, o4-mini
- Manually configured permissions in Microsoft Entra ID → App registrations / Enterprise applications → API permissions; added both Delegated and Application permissions and granted Admin consent
- Added myself and my colleagues as Azure AI User in Foundry → Project → Project users
- Followed all steps mentioned in this related discussion: https://techcommunity.microsoft.com/discussions/azure-ai-foundry-discussions/unable-to-publish-foundry-agent-to-m365-copilot-or-teams/4481420

Questions

- How can I make a Foundry agent work correctly in Microsoft Teams?
- Why does the agent fail to process requests in Teams while working correctly in Foundry?
- What does the "Permissions needed" warning actually mean for Foundry agents?
- How can I properly share the agent with other users in my organization?

Any guidance, diagnostics, or clarification on the correct publishing and permission model for Foundry agents in Teams would be greatly appreciated.

AlexeyPrudnikov, Jan 13, 2026
Get to know the core Foundry solutions

Foundry includes specialized services for vision, language, documents, and search, plus Microsoft Foundry for orchestration and governance. Here's what each does and why it matters:

Azure Vision

With Azure Vision, you can detect common objects in images, generate captions, descriptions, and tags based on image contents, and read text in images. Example: Automate visual inspections or extract text from scanned documents.

Azure Language

Azure Language helps organizations understand and work with text at scale. It can identify key information, gauge sentiment, and create summaries from large volumes of content. It also supports building conversational experiences and question-answering tools, making it easier to deliver fast, accurate responses to customers and employees. Example: Understand customer feedback or translate text into multiple languages.

Azure Document Intelligence

With Azure Document Intelligence, you can use prebuilt or custom models to extract fields from complex documents such as invoices, receipts, and forms. Example: Automate invoice processing or contract review.

Azure Search

Azure Search helps you find the right information quickly by turning your content into a searchable index. It uses AI to understand and organize data, making it easier to retrieve relevant insights. This capability is often used to connect enterprise data with generative AI, ensuring responses are accurate and grounded in trusted information. Example: Help employees retrieve policies or product details without digging through files.

Microsoft Foundry

Microsoft Foundry acts as the orchestration and governance layer for generative AI and AI agents. It provides tools for model selection, safety, observability, and lifecycle management. Example: Coordinate workflows that combine multiple AI capabilities with compliance and monitoring.

Business leaders often ask: which Foundry tool should I use? The answer depends on your workflow. For example:

- Are you trying to automate document-heavy processes like invoice handling or contract review?
- Do you need to improve customer engagement with multilingual support or sentiment analysis?
- Or are you looking to orchestrate generative AI across multiple processes for marketing or operations?

Connecting these needs to the right Foundry solution ensures you invest in technology that delivers measurable results. (A short code sketch of one of these services in action follows below.)
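To make one of these concrete, here is a minimal sketch of the Azure Language sentiment capability using the azure-ai-textanalytics Python package; the endpoint and key are placeholders for your own Language resource:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

feedback = [
    "The new portal is fast and easy to use.",
    "Checkout kept failing and support never replied.",
]

# Score each document's sentiment with per-class confidence values.
for doc in client.analyze_sentiment(documents=feedback):
    print(doc.sentiment, doc.confidence_scores)
```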
Open-Source SDK for Evaluating AI Model Outputs (Sharing Resource)

Hi everyone, I wanted to share a helpful open-source resource for developers working with LLMs, AI agents, or prompt-based applications. One common challenge in AI development is evaluating model outputs in a consistent and structured way. Manual evaluation can be subjective and time-consuming. The project below provides a framework to help with that:

AI-Evaluation SDK: https://github.com/future-agi/ai-evaluation

Key Features:
- Ready-to-use evaluation metrics
- Supports text, image, and audio evaluation
- Pre-defined prompt templates
- Quickstart examples available in Python and TypeScript
- Can integrate with workflows using toolkits like LangChain

Use Case: If you are comparing different models or experimenting with prompt variations, this SDK helps standardize the evaluation process and reduces manual scoring effort.

If anyone has experience with other evaluation tools or best practices, I'd be interested to hear what approaches you use. (A framework-agnostic sketch of the idea follows below.)

vihargadhesariya, Nov 05, 2025
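This is not the AI-Evaluation SDK's own API, but the general shape such tools standardize looks roughly like the following illustrative sketch (all names are hypothetical): fixed cases, a pluggable scorer, and a single harness applied to every model under comparison.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> float:
    """Toy metric: 1.0 on an exact (whitespace-insensitive) match."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def run_eval(model: Callable[[str], str],
             cases: list[EvalCase],
             scorer: Callable[[str, str], float]) -> float:
    """Score a model callable on a fixed case set and return the mean."""
    scores = [scorer(model(case.prompt), case.expected) for case in cases]
    return sum(scores) / len(scores)

# Usage: the same cases and scorer applied to any model under test.
cases = [
    EvalCase(prompt="What is 2 + 2?", expected="4"),
    EvalCase(prompt="Capital of France?", expected="Paris"),
]
fake_model = lambda p: "4" if "2 + 2" in p else "Paris"
print(run_eval(fake_model, cases, exact_match))  # 1.0
```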
Establish an Oracle Database Connection hosted on Azure VM via AI Foundry Agent

I have come across a requirement to create an AI Foundry agent that will accept requests from users like the following:

- "I want to connect to the abcprd database hosted on subscription sub1 and resource group rg1, and check the AWR report from xAM to yPM on a specific date (e.g., 21-Oct-2025)."
- "Check locking sessions / RMAN backup failures / active sessions on the database abcprd hosted on subscription sub1 and resource group rg1."

The agent should be able to fetch the relevant query from a knowledge base, connect to the database, and run the report for the duration mentioned. It should then fetch the report and pass it to the LLM (GPT-4.1 in our case) for investigation. I am looking for an approach to connect to the Oracle database based on the user's request and execute the query obtained from the knowledge base. (A rough sketch of one possible shape follows below.)

skandhw, Oct 24, 2025
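One possible shape, sketched under assumptions: the python-oracledb package for connectivity, a hard-coded dictionary standing in for the knowledge-base lookup, and the FunctionTool mechanism from azure-ai-agents to expose the function to the agent. Environment variable names are placeholders.

```python
import os

import oracledb  # python-oracledb (thin mode), assumed installed
from azure.ai.agents.models import FunctionTool

# Hypothetical knowledge-base lookup: request type -> vetted SQL.
KNOWN_QUERIES = {
    "active_sessions": (
        "SELECT sid, serial#, username FROM v$session WHERE status = 'ACTIVE'"
    ),
}

def run_db_report(report: str) -> str:
    """Run a vetted diagnostic query against the Oracle database and
    return the rows as text for the LLM to investigate.

    :param report: Name of the report to run, e.g. 'active_sessions'.
    """
    conn = oracledb.connect(
        user=os.environ["ORA_USER"],
        password=os.environ["ORA_PASSWORD"],
        dsn=os.environ["ORA_DSN"],  # e.g. "myvm.example.com:1521/abcprd"
    )
    with conn:
        with conn.cursor() as cur:
            cur.execute(KNOWN_QUERIES[report])
            return "\n".join(str(row) for row in cur.fetchall())

# Expose the function as an agent tool; pass functions.definitions as
# `tools` when creating the Foundry agent.
functions = FunctionTool(functions={run_db_report})
```

Keeping the SQL server-side in a vetted lookup (rather than letting the model write SQL) keeps the tool surface narrow and auditable.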
Azure AI Foundry SDK: Tool Approval Not Triggered When Using ConnectedAgentTool() Between Agents

I am building an orchestration workflow in Azure AI Foundry using the Python SDK. Each agent uses tools exposed via an MCP server (deployed in an Azure Container App), and individual agents work perfectly when run independently: tool approval is triggered, and execution proceeds as expected. I have a main agent which orchestrates the flow of these individual agents. However, when I connect one agent to another using ConnectedAgentTool(), the tool approval flow never occurs, and orchestration halts. All I see is the run status as IN_PROGRESS for some time before it exits. The downstream (child) agent is never invoked. I have tried mcp_tool.set_approval_mode("never"), but that didn't help.

Auto-Approval Implementation: I have implemented a polling loop that checks the run status and auto-approves any requires_action events.

```python
import asyncio

from azure.ai.agents.models import SubmitToolApprovalAction, ToolApproval
from azure.ai.projects.aio import AIProjectClient

async def poll_run_until_complete(project_client: AIProjectClient, thread_id: str, run_id: str):
    """
    Polls the run until completion. Auto-approves any tool calls encountered.
    """
    while True:
        run = await project_client.agents.runs.get(thread_id=thread_id, run_id=run_id)
        status = getattr(run, "status", None)
        print(f"[poll] Run {run_id} status: {status}")

        # Completed states
        if status in ("succeeded", "failed", "cancelled", "completed"):
            print(f"[poll] Final run status: {status}")
            if status == "failed":
                print("Run last_error:", getattr(run, "last_error", None))
            return run

        # Auto-approve any tool calls
        if status == "requires_action" and isinstance(getattr(run, "required_action", None), SubmitToolApprovalAction):
            submit_action = run.required_action.submit_tool_approval
            tool_calls = getattr(submit_action, "tool_calls", []) or []
            if not tool_calls:
                print("[poll] requires_action but no tool_calls found. Waiting...")
            else:
                approvals = []
                for tc in tool_calls:
                    print(f"[poll] Auto-approving tool call: {tc.id} name={tc.name} args={tc.arguments}")
                    approvals.append(ToolApproval(tool_call_id=tc.id, approve=True))
                if approvals:
                    await project_client.agents.runs.submit_tool_outputs(
                        thread_id=thread_id,
                        run_id=run_id,
                        tool_approvals=approvals
                    )
                    print("[poll] Submitted tool approvals.")
        else:
            # Debug: Inspect run steps if stuck
            run_steps = [s async for s in project_client.agents.run_steps.list(thread_id=thread_id, run_id=run_id)]
            for step in run_steps:
                sid = getattr(step, "id", None)
                sstatus = getattr(step, "status", None)
                print(f"  step: id={sid} status={sstatus}")
                step_details = getattr(step, "step_details", None)
                if step_details:
                    tool_calls = getattr(step_details, "tool_calls", None)
                    if tool_calls:
                        for call in tool_calls:
                            print(f"    tool_call id={getattr(call,'id',None)} name={getattr(call,'name',None)} args={getattr(call,'arguments',None)} output={getattr(call,'output',None)}")

        await asyncio.sleep(1)
```

This code works and auto-approves tool calls for MCP tools. But while using ConnectedAgentTool(), the run never enters requires_action, so no approvals are requested and the orchestration halts.

Environment:
- azure-ai-agents==1.2.0b4
- azure-ai-projects==1.1.0b4
- Python: 3.11.13
- Auth: DefaultAzureCredential

Notes: MCP tools work and trigger approval normally when directly attached, and individual agents function as expected in standalone runs. Can anyone help here? (A wiring sketch for the connected-agent setup is included below for comparison.)

reshmisreedharan, Oct 03, 2025
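For comparison, this is roughly how a connected agent is wired up, shown with the sync client for brevity (the endpoint, IDs, tool name, and description are placeholders; this sketch only shows the ConnectedAgentTool construction, it is not a fix for the approval issue):

```python
from azure.ai.agents.models import ConnectedAgentTool
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint="<project-endpoint>",  # placeholder
    credential=DefaultAzureCredential(),
)

# Child agent that owns the MCP tools; the ID is a placeholder.
child = project_client.agents.get_agent("<child-agent-id>")

connected = ConnectedAgentTool(
    id=child.id,
    name="report_agent",  # hypothetical tool name the orchestrator sees
    description="Fetches and runs diagnostic reports on request.",
)

main_agent = project_client.agents.create_agent(
    model="gpt-4.1",
    name="orchestrator",
    instructions="Delegate report requests to the report_agent tool.",
    tools=connected.definitions,
)
```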
Issue when connecting from SPFX to Entra-enabled Azure AI Foundry resource

We have been successfully connecting our chat bot from an SPFx solution to a chat completion model in Azure, using key authentication. We now have a requirement to disable key authentication. This is what we've done so far:

- Disabled API-key authentication on the resource
- Gave the SharePoint Client Extensibility Web Application Principal the "Cognitive Services OpenAI User", "Cognitive Services User", and "Cognitive Data Reader" permissions on the resource
- In the SPFx solution, added the following to package-solution.json (and approved it in the SharePoint admin site):

```json
"webApiPermissionRequests": [
  {
    "resource": "Azure Machine Learning Services",
    "scope": "user_impersonation"
  }
]
```

To connect to the chat completion API we're using fetchEventSource from '@microsoft/fetch-event-source', so we're getting a Bearer token using AadTokenProviderFactory from "@microsoft/sp-http", e.g.:

```typescript
// preceded by some code to get the tokenProvider from aadTokenProviderFactory
const token = await tokenProvider.getToken('https://ai.azure.com');
const url = "https://my-ai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview";

await fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-type': 'application/json',
    Authorization: `Bearer ${token}`
  },
  body: body,
  // ... truncated
```

We added the users (let's say, email address removed for privacy reasons) to the resource as an Azure AI User. When we try to get this to work, we get the following error:

The principal `email address removed for privacy reasons` lacks the required data action `Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action` to perform `POST /openai/deployments/{deployment-id}/chat/completions` operation.

How can we make this work? Ideally we would prefer the SPFx principal to make the request to the chat completion API, without needing to add end users to the resource through IAM, but my understanding is that AadTokenProviderFactory only issues delegated access tokens. (A small diagnostic sketch is included below.)

WFC, Sep 27, 2025
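One thing worth checking outside SPFx: the documented Entra scope for the Azure OpenAI data plane is https://cognitiveservices.azure.com/.default, whereas the snippet above requests a token for https://ai.azure.com, so the token audience itself may be the problem. A small Python sketch to test whether the role assignment works independently of SharePoint (the endpoint and deployment are the ones from the post):

```python
import requests
from azure.identity import DefaultAzureCredential

# Azure OpenAI's data plane expects tokens for the Cognitive Services
# audience; a token minted for https://ai.azure.com may be rejected.
token = DefaultAzureCredential().get_token(
    "https://cognitiveservices.azure.com/.default"
)

url = (
    "https://my-ai-resource.openai.azure.com/openai/deployments/"
    "gpt-4o/chat/completions?api-version=2025-01-01-preview"
)

resp = requests.post(
    url,
    headers={
        "Authorization": f"Bearer {token.token}",
        "Content-Type": "application/json",
    },
    json={"messages": [{"role": "user", "content": "ping"}]},
)
print(resp.status_code, resp.text[:300])
```

If this succeeds while the SPFx call fails, the role assignment is fine and the issue is the scope requested in the browser.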
Responses API for gpt-4.1 in Europe

Hello everyone! I'm writing here trying to figure out something about the availability of the "responses" APIs in European regions: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?tabs=python-key

I'm trying to deploy a /responses endpoint for the model we are currently using, gpt-4.1, since I've read that the /completions endpoint will be retired by OpenAI starting from August 2026. Our application is currently migrating all API calls from completions to responses, and we were wondering if we could already do the same for our clients in Europe, who have to comply with GDPR and currently use our Azure subscription. On the page linked above, I can see some regions that would fit our needs, in particular:

- francecentral
- norwayeast
- polandcentral
- swedencentral
- switzerlandnorth

But then I can also read "Not every model is available in the regions supported by the responses API", which probably already answers my question: from the Azure AI Foundry portal, I wasn't able to deploy such an endpoint in those regions, except for the o3 model. For the gpt-4.1 model, only the completions endpoint is listed, while searching for "Responses" (in "Deploy base model") returns only o3 and a few other models.

Can you confirm that I'm not doing anything wrong (looking in the wrong place to deploy such an endpoint), and that the gpt-4.1 responses API is currently not available in any European region? Do you think it's realistic that it will be available soon (like in 2025)? Any European region would work for us, in the "DataZone-Standard" type of deployment, which already ensures GDPR compliance (no need for a "Standard" one in a specific region).

Thank you for your attention, have a nice day or evening,

Awhy_Developer, Sep 18, 2025
Chaining and Streaming with Responses API in Azure

The Responses API is an enhancement of the existing Chat Completions API. It is stateful and supports agentic capabilities. As a superset of the Chat Completions class, it continues to support chat completions functionality. In addition, reasoning models like GPT-5 deliver better model intelligence when compared to Chat Completions. It has input flexibility, supporting a range of input types. It is currently available in a set of Azure regions and can be used with all the models available in a given region. The API supports response streaming, chaining, and function calling. In the examples below, we use the gpt-5-nano model for a simple response, a chained response, and streaming responses.

To get started, update the installed openai library:

```
pip install --upgrade openai
```

Simple Message

1) Build the client with the following code:

```python
from openai import OpenAI

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
)
```

2) The call returns a response whose id can then be used to retrieve the message:

```python
# Non-streaming request
resp_id = client.responses.create(
    model=deployment,
    input=messages,
)
```

3) The message is retrieved using the response id from the previous step:

```python
response = client.responses.retrieve(resp_id.id)
```

Chaining

For a chained message, the extra step is sharing the context. This is done by sending the response id in subsequent requests:

```python
resp_id = client.responses.create(
    model=deployment,
    previous_response_id=resp_id.id,
    input=[{"role": "user", "content": "Explain this at a level that could be understood by a college freshman"}],
)
```

Streaming

A different function call is used for streaming queries:

```python
client.responses.stream(
    model=deployment,
    input=messages,  # structured messages
)
```

In addition, the streaming response has to be handled appropriately until the end of the event stream:

```python
for event in s:
    # Accumulate only text deltas for clean output
    if event.type == "response.output_text.delta":
        delta = event.delta or ""
        text_out.append(delta)
        # Echo streaming output to console as it arrives
        print(delta, end="", flush=True)
```

The code is available in the following GitHub repository: https://github.com/arunacarunac/ResponsesAPI

Additional details are available in the following links: Azure OpenAI Responses API - Azure OpenAI | Microsoft Learn

(A combined end-to-end sketch follows below.)
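Putting the pieces above together, a minimal end-to-end sketch that chains a streamed follow-up onto a non-streamed first turn (endpoint, api_key, and deployment are assumed to be defined as in the snippets above):

```python
from openai import OpenAI

client = OpenAI(base_url=endpoint, api_key=api_key)

# First turn: non-streaming request.
first = client.responses.create(
    model=deployment,
    input="Briefly explain quantum entanglement.",
)
print(first.output_text)

# Second turn: chained via previous_response_id and streamed.
with client.responses.stream(
    model=deployment,
    previous_response_id=first.id,
    input=[{"role": "user",
            "content": "Explain this at a level a college freshman could understand."}],
) as stream:
    for event in stream:
        # Print only the incremental text deltas as they arrive.
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
print()
```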
Do you have experience fine-tuning GPT OSS models?

Hi, I found this space called Affine. It is a daily reinforcement learning competition and I'm participating in it. One thing I am looking for collaboration on is fine-tuning GPT OSS models to score well on the evaluations. I am wondering if anyone here is interested in mining? I feel that people here would have some good reinforcement learning tricks. These models are evaluated on a set of RL environments, with validators looking for the model that dominates the Pareto frontier. I'm specifically looking for any improvements in the coding deduction environment and the new ELR environment they made. I would like to use a GPT OSS model here, but it's hard to fine-tune these models with GRPO. (A small illustration of the Pareto-dominance rule is below.) Here is the information I found on Affine: https://www.reddit.com/r/reinforcementlearning/comments/1mnq6i0/comment/n86sjrk/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Intii, Aug 25, 2025
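For anyone unfamiliar with the selection rule mentioned above, Pareto dominance over per-environment scores is easy to state in code. A small illustrative sketch with made-up environment names and numbers:

```python
# Pareto dominance: model A dominates model B if A scores at least as
# well in every environment and strictly better in at least one.
def dominates(a: dict[str, float], b: dict[str, float]) -> bool:
    envs = a.keys() & b.keys()
    return all(a[e] >= b[e] for e in envs) and any(a[e] > b[e] for e in envs)

# Made-up scores over two environments.
scores = {
    "model_a": {"coding_deduction": 0.61, "elr": 0.55},
    "model_b": {"coding_deduction": 0.58, "elr": 0.47},
}
print(dominates(scores["model_a"], scores["model_b"]))  # True
```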
Image Dataset in Azure AI Asking for Tabular Format During Training

Hi everyone, I'm working on an image-based project in Azure AI. My images (PNG) are stored in Azure Blob Storage, and I registered them as a folder in Data Assets. When I start training, the UI asks for a tabular dataset instead. Since my data is images, I'm unsure how to proceed or whether I need to convert or register the dataset differently. What's the correct way to set up image data for training in Azure AI?

BhavithaTirumala03, Aug 12, 2025
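A likely cause for the question above: Azure ML's AutoML image tasks train from an MLTable that wraps a JSONL file of labeled image URIs, not from a raw image-folder asset. A hedged sketch of producing such a JSONL (the subscription, workspace, datastore, folder paths, and labels are all placeholders):

```python
import json

# Placeholder AzureML datastore URI pointing at the blob folder of images.
datastore_uri = (
    "azureml://subscriptions/<sub-id>/resourcegroups/<rg>/workspaces/<ws>"
    "/datastores/workspaceblobstore/paths/images"
)

# Hypothetical file-to-label pairs; in practice, derive these from
# folder names or an annotation export.
files_and_labels = [
    ("cats/001.png", "cat"),
    ("dogs/001.png", "dog"),
]

# One JSON object per line: the shape AutoML image classification reads.
with open("training_annotations.jsonl", "w") as f:
    for relative_path, label in files_and_labels:
        record = {"image_url": f"{datastore_uri}/{relative_path}", "label": label}
        f.write(json.dumps(record) + "\n")
```

The JSONL is then referenced from an MLTable definition (with a read_json_lines transformation) and registered as an mltable data asset, which is the tabular input the training UI is asking for.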