Recent Discussions
Azure AI Foundry SDK - Tool Approval Not Triggered When Using ConnectedAgentTool() Between Agents
I am building an orchestration workflow in Azure AI Foundry using the Python SDK. Each agent uses tools exposed via an MCP server (deployed in an Azure Container App), and individual agents work perfectly when run independently — tool approval is triggered, and execution proceeds as expected. I have a main agent which orchestrates the flow of these individual agents.

However, when I connect one agent to another using ConnectedAgentTool(), the tool approval flow never occurs, and orchestration halts. All I see is the run status as IN-PROGRESS for some time, and then it exits. The downstream (child) agent is never invoked. I have tried mcp_tool.set_approval_mode("never"), but that didn't help.

Auto-Approval Implementation: I have implemented a polling loop that checks the run status and auto-approves any requires_action events.

```python
import asyncio

# Imports implied by the snippet (paths per the azure-ai-projects / azure-ai-agents betas)
from azure.ai.projects.aio import AIProjectClient
from azure.ai.agents.models import SubmitToolApprovalAction, ToolApproval


async def poll_run_until_complete(project_client: AIProjectClient, thread_id: str, run_id: str):
    """
    Polls the run until completion. Auto-approves any tool calls encountered.
    """
    while True:
        run = await project_client.agents.runs.get(thread_id=thread_id, run_id=run_id)
        status = getattr(run, "status", None)
        print(f"[poll] Run {run_id} status: {status}")

        # Completed states
        if status in ("succeeded", "failed", "cancelled", "completed"):
            print(f"[poll] Final run status: {status}")
            if status == "failed":
                print("Run last_error:", getattr(run, "last_error", None))
            return run

        # Auto-approve any tool calls
        if status == "requires_action" and isinstance(getattr(run, "required_action", None), SubmitToolApprovalAction):
            submit_action = run.required_action.submit_tool_approval
            tool_calls = getattr(submit_action, "tool_calls", []) or []
            if not tool_calls:
                print("[poll] requires_action but no tool_calls found. Waiting...")
            else:
                approvals = []
                for tc in tool_calls:
                    print(f"[poll] Auto-approving tool call: {tc.id} name={tc.name} args={tc.arguments}")
                    approvals.append(ToolApproval(tool_call_id=tc.id, approve=True))
                if approvals:
                    await project_client.agents.runs.submit_tool_outputs(
                        thread_id=thread_id,
                        run_id=run_id,
                        tool_approvals=approvals,
                    )
                    print("[poll] Submitted tool approvals.")
        else:
            # Debug: inspect run steps if stuck
            run_steps = [s async for s in project_client.agents.run_steps.list(thread_id=thread_id, run_id=run_id)]
            for step in run_steps:
                sid = getattr(step, "id", None)
                sstatus = getattr(step, "status", None)
                print(f"  step: id={sid} status={sstatus}")
                step_details = getattr(step, "step_details", None)
                if step_details:
                    step_tool_calls = getattr(step_details, "tool_calls", None)
                    if step_tool_calls:
                        for call in step_tool_calls:
                            print(
                                f"    tool_call id={getattr(call, 'id', None)} "
                                f"name={getattr(call, 'name', None)} "
                                f"args={getattr(call, 'arguments', None)} "
                                f"output={getattr(call, 'output', None)}"
                            )

        await asyncio.sleep(1)
```

This code works and auto-approves tool calls for MCP tools. But while using ConnectedAgentTool(), the run never enters requires_action — so no approvals are requested, and the orchestration halts.

Environment:
- azure-ai-agents==1.2.0b4
- azure-ai-projects==1.1.0b4
- Python: 3.11.13
- Auth: DefaultAzureCredential

Notes: MCP tools work and trigger approval normally when directly attached, and individual agents function as expected in standalone runs.

Can anyone help here?
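For context, this is roughly how the agents are wired together. A simplified sketch; the model deployment, agent names, and instructions are placeholders:

```python
from azure.ai.agents.models import ConnectedAgentTool

# Child agent: works standalone, and MCP tool approval triggers normally.
child_agent = await project_client.agents.create_agent(
    model="gpt-4o",  # placeholder deployment name
    name="child-agent",
    instructions="Answer questions using the MCP tools.",
    tools=mcp_tool.definitions,
)

# Expose the child agent to the orchestrator as a connected-agent tool.
connected = ConnectedAgentTool(
    id=child_agent.id,
    name="child_agent",
    description="Delegates domain questions to the child agent.",
)

# Main agent: orchestrates via the connected agent; this is where
# tool approval never fires and the run stalls.
main_agent = await project_client.agents.create_agent(
    model="gpt-4o",
    name="main-agent",
    instructions="Route user requests to the connected agents.",
    tools=connected.definitions,
)
```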
I can't delete my Azure Key Vault Connection in Azure AI Foundry

I have deleted all projects under my Azure AI Foundry resource, but I still can't delete the Azure Key Vault connection. Error: "Azure Key Vault connection [Azure Key Vault Name] cannot be deleted, all credentials will be lost." Why is this happening?
AI Foundry - Open API spec tool issue

Hello, I'm trying to invoke my application's API as a tool within the AI Foundry OpenAPI specification tool. However, I keep encountering a 401 Unauthorized error. I'm using a Bearer token for authentication, and it works perfectly when tested via Postman. I'm unsure whether the issue lies in the input/output schema or the connection configuration. Unfortunately, the AI Foundry traces aren't providing enough detail to pinpoint the exact problem.

Additionally, my API and AI Foundry accounts are hosted in different Azure subscriptions and networks. Could this network separation be affecting the connection? I would appreciate any guidance or help to resolve this issue.

-Tamizh
Issue when connecting from SPFX to Entra-enabled Azure AI Foundry resource

We have been successfully connecting our chat bot from an SPFX solution to a chat completion model in Azure, using key authentication. We now have a requirement to disable key authentication. This is what we've done so far:

- Disabled key authentication on the resource.
- Gave the SharePoint Client Extensibility Web Application Principal the "Cognitive Services OpenAI User", "Cognitive Services User", and "Cognitive Data Reader" permissions on the resource.
- In the SPFX we have added the following to package-solution.json (and we have approved it in the SharePoint admin site):

```json
"webApiPermissionRequests": [
  {
    "resource": "Azure Machine Learning Services",
    "scope": "user_impersonation"
  }
]
```

To connect to the chat completion API we're using fetchEventSource from '@microsoft/fetch-event-source', so we're getting a Bearer token using AadTokenProviderFactory from "@microsoft/sp-http", e.g.:

```typescript
// preceded by some code to get the tokenProvider from aadTokenProviderFactory
const token = await tokenProvider.getToken('https://ai.azure.com');

const url = "https://my-ai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview";

await fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-type': 'application/json',
    Authorization: `Bearer ${token}`
  },
  body: body,
  ... // truncated
```

We added the users (let's say, email address removed for privacy reasons) to the resource as an Azure AI User. When we try to get this to work, we get the following error:

The principal `email address removed for privacy reasons` lacks the required data action `Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action` to perform the `POST /openai/deployments/{deployment-id}/chat/completions` operation.

How can we make this work? Ideally we would prefer the SPFX principal to make the request to the chat completion API, without needing to add end users to the resource through IAM, but my understanding is that AadTokenProviderFactory only issues delegated access tokens.
Trigger can't read Fabric data agent

I made an agent in Azure AI Foundry and use a Fabric data agent as a knowledge source. Everything runs well until I try to use a trigger to orchestrate my agent. I have added my trigger identity to the Fabric workspace where my Fabric data agent and my lakehouse are located. The trigger runs without errors, but my agent does not respond the way it does when I prompt it via the playground. Why?
Responses API for gpt-4.1 in Europe

Hello everyone! I'm writing here trying to figure out something about the availability of the "responses" APIs in European regions: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?tabs=python-key

I'm trying to deploy a /responses endpoint for the model we are currently using, gpt-4.1, since I've read that the /completions endpoint will be dismissed by OpenAI starting from August 2026. Our application is currently migrating all its API calls from completions to responses, and we were wondering if we could already do the same for our clients in Europe as well, which have to comply with GDPR and currently use our Azure subscription.

In the page linked above, I can see some regions that would fit our needs, in particular:

- francecentral
- norwayeast
- polandcentral
- swedencentral
- switzerlandnorth

But then, I can also read "Not every model is available in the regions supported by the responses API.", which probably already answers my question: from the Azure AI Foundry portal, I wasn't able to deploy such an endpoint in those regions, except for the o3 model. For the gpt-4.1 model, only the completions endpoint is listed, while searching for "Responses" (in "Deploy base model") returns only o3 and a few other models.

Can you confirm that I'm not doing anything wrong (looking in the wrong place to deploy such an endpoint), and that currently the gpt-4.1 responses API is not available in any European region? Do you think it's realistic that it will be soon (like in 2025)? Any European region would work for us, in the "DataZone-Standard" type of deployment, which already ensures GDPR compliance (no need for a "Standard" one in one specific region).

Thank you for your attention, have a nice day or evening!
Azure OpenAI: GPT-5-Codex Availability?

Greetings everyone! I just wanted to see if there's any word as to when/if GPT-5-Codex (https://openai.com/index/introducing-upgrades-to-codex/) will make its way to AI Foundry. It was released on September 15th, 2025, but I have no idea how long Azure tends to lag behind OpenAI's releases. There doesn't really seem to be any source of information, whenever new models drop, on what Azure is going to do with them, if anything. Any conversation around this would be helpful and appreciated, thanks!
Unable to locate and add a VM (GPU family) to my available VM options

I am using Azure AI Foundry and need to run GPU workloads, but N-series VM options do not appear when I try to add quota. Only CPU families like D and E are listed. How can I enable or request N-series GPU VMs for my subscription and region?
Azure Communication Services - Python SDK Call Media not working with CallConnectionClient

Hi team, I'm working on a FastAPI service that uses Azure Communication Services Call Automation (Python SDK) to handle outbound PSTN calls and real-time speech interaction. So far it is able to make phone calls but not able to do the media handling part during the conversation.

Environment:
- Python version: 3.12-slim
- Package: azure-communication-callautomation (version: 1.4.0)
- Hosting: Azure Container Apps
- A Speech cognitive resource is connected to Azure Communication Services

https://drive.google.com/file/d/1uC2S-LNx_Ybpp1QwOCtqFS9pwA84mK7h/view?usp=drive_link

What I'm trying to do:
- Place an outbound call to a PSTN number
- Play a greeting (TextSource) when the call is connected
- Start continuous speech recognition, forward the transcript to an AI endpoint, then play the response back

Code snippet:

```python
# Play greeting
try:
    call_connection = client.get_call_connection(call_id)
    call_media = call_connection.call_media()
    call_media.play_to_all(
        play_source,
        operation_context="welcome-play"
    )
    print("Played welcome greeting.")
except Exception as e:
    print("Play Greeting Failed: ", str(e))

# Start recognition
participants = list(call_connection.list_participants())
for p in participants:
    if isinstance(p.identifier, PhoneNumberIdentifier):
        active_participants[call_id] = p.identifier
        try:
            call_connection = client.get_call_connection(call_id)
            call_media = call_connection.call_media()
            call_media.start_recognizing_media(
                target_participant=p.identifier,
                input_type="speech",
                interrupt_call_media_operation=True,
                operation_context="speech-recognition"
            )
            print("Started recognition immediately after call connected.")
        except Exception as e:
            print("Recognition start failed:", str(e))
        break

target_participant = active_participants.get(call_id)
if not target_participant:
    print(f"No PSTN participant found for call {call_id}, skipping recognition.")
```

Issue: When the CallConnected event fires, I get different errors depending on which method I try:

- 'CallConnectionClient' object has no attribute 'call_media'
- 'CallConnectionClient' object has no attribute 'get_call_media_operations'
- 'CallConnectionClient' object has no attribute 'play_to_all'
- 'CallConnectionClient' object has no attribute 'get_call_media_client'
- 'CallConnectionClient' object has no attribute 'get_call_media'

Also some import errors:

- ImportError: cannot import name 'PlayOptions' from 'azure.communication.callautomation'
- ImportError: cannot import name 'RecognizeOptions' from 'azure.communication.callautomation'
- ImportError: cannot import name 'CallMediaRecognizeOptions' from 'azure.communication.callautomation'
- ImportError: cannot import name 'CallConnection' ... Did you mean: 'CallConnectionState'?

This makes me unsure which API is the correct/updated way to access play_to_all and start_recognizing_media.

https://drive.google.com/file/d/1xI-sWil0OKfAfGwjIgG25eD7CEK95rKc/view?usp=drive_link

Questions:
- What is the current supported way to access call media operations (play / speech recognition) in the Python SDK?
- Are there breaking changes between SDK versions that I should be aware of?
- Should I upgrade to a specific minimum version to ensure .call_media works?

Thanks in advance!
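For what it's worth, here is the shape I'm currently experimenting with, based on my (possibly wrong) assumption that in the Python SDK the media operations hang directly off CallConnectionClient rather than a separate call-media object; confirming or correcting this is exactly what I'm asking. Connection string, phone number, and voice name are placeholders:

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    PhoneNumberIdentifier,
    TextSource,
)

# Placeholders: substitute your ACS connection string and call id.
client = CallAutomationClient.from_connection_string(acs_connection_string)
call_connection = client.get_call_connection(call_id)

# Assumption: no call_media() sub-client; play directly on the connection.
play_source = TextSource(
    text="Hello! How can I help you today?",
    voice_name="en-US-JennyNeural",  # placeholder neural voice
)
call_connection.play_media_to_all(play_source, operation_context="welcome-play")

# Assumption: speech recognition is also started on the connection itself.
call_connection.start_recognizing_media(
    input_type="speech",
    target_participant=PhoneNumberIdentifier(target_phone_number),
    interrupt_call_media_operation=True,
    operation_context="speech-recognition",
)
```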
Agent in Azure AI Foundry not able to access SharePoint data via C# (but works in Foundry portal)

Hi Team, I created an agent in Azure AI Foundry and added a knowledge source using the SharePoint tool. When I test the agent inside the Foundry portal, it works correctly; it can read from the SharePoint site and return file names/data. However, when I call the same agent using C# code, it answers normal questions fine, but whenever I ask about the SharePoint data, I get the error:

Sorry, something went wrong. Run status: failed

I also referred to the official documentation and sample here: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/sharepoint-samples?pivots=rest

I tried the cURL samples as well, and while the agent is created successfully, the run status always comes back as failed. Has anyone faced this issue? Do I need to configure something extra for SharePoint when calling the agent programmatically (like additional permissions or connection binding)? Any help on this would be greatly appreciated. Thanks!
Chaining and Streaming with Responses API in Azure

The Responses API is an enhancement of the existing Chat Completions API. It is stateful and supports agentic capabilities. As a superset of the Chat Completions class, it continues to support the functionality of chat completions, and reasoning models like GPT-5 deliver better model intelligence with it than with Chat Completions. It has input flexibility, supporting a range of input types. It is currently available in a set of Azure regions and can be used with all the models available in a given region. The API supports response streaming, chaining, and function calling.

In the examples below, we use the gpt-5-nano model for a simple response, a chained response, and streaming responses. To get started, update the installed openai library:

```
pip install --upgrade openai
```

Simple Message

1) Build the client with the following code:

```python
from openai import OpenAI

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
)
```

2) The response received contains an id, which can then be used to retrieve the message.

```python
# Non-streaming request
resp_id = client.responses.create(
    model=deployment,
    input=messages,
)
```

3) The message is retrieved using the response id from the previous step.

```python
response = client.responses.retrieve(resp_id.id)
```

Chaining

For a chained message, the extra step is sharing the context. This is done by sending the previous response id in subsequent requests.

```python
resp_id = client.responses.create(
    model=deployment,
    previous_response_id=resp_id.id,
    input=[{"role": "user", "content": "Explain this at a level that could be understood by a college freshman"}],
)
```

Streaming

A different function call is used for streaming queries.

```python
s = client.responses.stream(
    model=deployment,
    input=messages,  # structured messages
)
```

In addition, the streaming query response has to be handled appropriately until the end of the event stream.

```python
for event in s:
    # Accumulate only text deltas for clean output
    if event.type == "response.output_text.delta":
        delta = event.delta or ""
        text_out.append(delta)
        # Echo streaming output to console as it arrives
        print(delta, end="", flush=True)
```

The code is available in the following GitHub link: https://github.com/arunacarunac/ResponsesAPI

Additional details are available in the following links: Azure OpenAI Responses API - Azure OpenAI | Microsoft Learn
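Putting the steps together, here is a minimal end-to-end sketch; the endpoint, key, and deployment name are placeholders, and error handling is omitted:

```python
import os
from openai import OpenAI

# Placeholders: substitute your Azure OpenAI endpoint, API key, and deployment.
client = OpenAI(
    base_url=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
deployment = "gpt-5-nano"

# Simple (non-streaming) request.
first = client.responses.create(
    model=deployment,
    input=[{"role": "user", "content": "Explain quantum entanglement briefly."}],
)
print(first.output_text)

# Chained follow-up: the previous response id carries the server-side context.
followup = client.responses.create(
    model=deployment,
    previous_response_id=first.id,
    input=[{"role": "user", "content": "Explain this at a level that could be understood by a college freshman"}],
)
print(followup.output_text)

# Streaming request: print text deltas as they arrive.
with client.responses.stream(
    model=deployment,
    input=[{"role": "user", "content": "Write a haiku about streaming."}],
) as s:
    for event in s:
        if event.type == "response.output_text.delta":
            print(event.delta or "", end="", flush=True)
    print()
```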
Azure OpenAI: gpt-5-mini chat/completions streaming returns empty response

Summary

When calling gpt-5-mini via Chat Completions with "stream": true, the server opens the stream but no assistant tokens are emitted, and the final JSON is empty (choices: [], created: 0, empty id/model). The same code path streams correctly for gpt-5 and gpt-4o deployments. Also, non-streaming ("stream": false) with gpt-5-mini returns valid content as expected.

Environment

- API: POST /openai/deployments/{deployment}/chat/completions?api-version=2025-01-01-preview
- Model / Deployment: gpt-5-mini (Azure OpenAI deployment)
- Date/Time observed: 26 Aug 2025, ~13:00 IST (UTC+05:30)
- Region: useast2
- Note: Same client, headers, and network path work for gpt-5 and gpt-4o streaming.

Request

Endpoint: /openai/deployments/gpt-5/chat/completions?api-version=2025-01-01-preview

Body:

```json
{
  "messages": [
    { "role": "system", "content": "give the best result you can" },
    { "role": "user", "content": "Hello" }
  ],
  "stream": true
}
```

Actual Response (final aggregated JSON after stream ends)

```json
{
  "choices": [],
  "created": 0,
  "id": "",
  "model": "",
  "object": "",
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": { "filtered": false, "severity": "safe" },
        "jailbreak": { "filtered": false, "detected": false },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  ]
}
```

Notes:
- No delta tokens arrive on the SSE stream.
- No assistant message content is ever emitted.
- Content filter result is safe across categories.

Expected Behavior

With "stream": true, the server should emit SSE chunks with assistant delta tokens and finish with a populated final message in choices[0].message.content.
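For reproduction, a minimal sketch with the openai Python SDK against the same deployment; endpoint, key, and deployment name are placeholders:

```python
import os
from openai import AzureOpenAI

# Placeholders: substitute your Azure OpenAI endpoint and API key.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-01-01-preview",
)

stream = client.chat.completions.create(
    model="gpt-5-mini",  # deployment name (placeholder)
    messages=[
        {"role": "system", "content": "give the best result you can"},
        {"role": "user", "content": "Hello"},
    ],
    stream=True,
)

# Expected: delta tokens arrive per chunk. Observed with gpt-5-mini: none.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```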
Do you have experience fine tuning GPT OSS models?

Hi, I found this space called Affine. It is a daily reinforcement learning competition, and I'm participating in it. One thing I am looking for collaboration on is fine tuning GPT OSS models to score well on the evaluations. I am wondering if anyone here is interested in mining? I feel that people here would have some good reinforcement learning tricks. These models are evaluated on a set of RL environments, with validators looking for the model which dominates the Pareto frontier. I'm specifically looking for improvements in the coding deduction environment and the new ELR environment they made. I would like to use a GPT OSS model here, but it's hard to fine-tune these models with GRPO. Here is the information I found on Affine: https://www.reddit.com/r/reinforcementlearning/comments/1mnq6i0/comment/n86sjrk/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Image Dataset in Azure AI Asking for Tabular Format During Training

Hi everyone, I'm working on an image-based project in Azure AI. My images (PNG) are stored in Azure Blob Storage, and I registered them as a folder in Data Assets. When I start training, the UI asks for a tabular dataset instead. Since my data is images, I'm unsure how to proceed or whether I need to convert or register the dataset differently. What's the correct way to set up image data for training in Azure AI?
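One pattern that may be relevant here, offered as a hypothesis rather than a confirmed fix: Azure ML image jobs are often fed a JSONL index over the blob images rather than the raw folder. A sketch of generating such an index; the container URL and folder layout are placeholders, and whether this is the schema the training UI expects is exactly the question:

```python
import json
from pathlib import Path

# Placeholders: blob container URL and a local mirror of the image folders,
# where each image's parent folder name serves as its class label.
container_url = "https://<account>.blob.core.windows.net/<container>"
local_root = Path("images")

with open("training_annotations.jsonl", "w") as f:
    for png in sorted(local_root.rglob("*.png")):
        record = {
            "image_url": f"{container_url}/{png.relative_to(local_root).as_posix()}",
            "label": png.parent.name,
        }
        f.write(json.dumps(record) + "\n")
```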
Kamal Hinduja Switzerland How do algorithms interact with machine learning?

Hi All, I'm Kamal Hinduja, based in Geneva, Switzerland. Can anyone explain in detail how algorithms interact with machine learning? Thanks, Regards, Kamal Hinduja, Geneva, Switzerland
Evaluation

Hi there, I tried out the evaluation feature and tested groundedness, relevance, and similarity. My dataset has 94 questions, and both relevance and similarity checked all 94 questions and their respective responses, giving me either a pass or a fail. However, groundedness completed the run with errors, as almost 10 of the inputs came back as null. I tried going through the logs but I'm not sure where to check what went wrong for those questions. I'd appreciate it if someone could point me in the right direction.
ChatGPT 5 Has Arrived: What You Need to Know

The wait is over. OpenAI has officially launched GPT-5, and it's already being hailed as the most significant leap forward in AI capability since the original release of ChatGPT. OpenAI CEO Sam Altman described the new model as a "PhD-level expert" that offers a unified, smarter, and more reliable experience. This isn't just an incremental update; it's a fundamental shift in how the AI works, bringing together the best of previous models into a single, powerful system.

What's New and Improved?

GPT-5 introduces a host of features that address key limitations of its predecessors. One of the most talked-about advancements is the reduction in hallucinations, where the model generates false information. According to OpenAI, GPT-5 is significantly more factually consistent and trustworthy, especially in "thinking mode," which uses a chain-of-thought approach to solve complex problems. This makes it more suitable for high-stakes tasks in fields like healthcare and coding.

Another major change is the unified model architecture. Instead of manually switching between different models like GPT-4 or GPT-4o, the new system automatically routes your query to the best model for the job. This "smart router" instantly decides whether to prioritize speed for a simple question or engage in a deeper, more comprehensive reasoning process for a complex one.

The context window has also been dramatically improved. While previous models had limits on how much information they could remember in a single session, GPT-5 can handle up to 272,000 tokens of input, allowing it to maintain context through much longer conversations and documents.

A New Era for Developers and Users

For developers, GPT-5 represents a game-changer. It is being called OpenAI's "strongest coding model yet," excelling in a variety of tasks from bug fixing and multi-language programming to generating entire software programs from a single prompt. This new capability, dubbed "vibe coding" by Altman, allows for the creation of functional applications with minimal human input, which could drastically reduce development cycles.

For general users, the experience is more intuitive and personalized. GPT-5 is now the default model for all users, including those on the free plan, though with usage limits. You can also customize your experience with new selectable personalities like "Cynic," "Robot," "Listener," and "Nerd." This move towards greater accessibility and user control demonstrates OpenAI's commitment to making powerful AI tools available to everyone.

The Road Ahead

While GPT-5 marks a major step toward Artificial General Intelligence (AGI), it's not without its challenges. The initial rollout saw a minor mathematical error, a reminder that even the most advanced AI benefits from clear instructions. The ongoing competition with other models like Claude 4 and Gemini 2.0 also ensures that the pace of innovation will only continue to accelerate. Ultimately, GPT-5's true impact will be measured not just by its impressive benchmarks, but by how businesses and individuals leverage its new capabilities to solve real-world problems. It's a new era, and the AI landscape has been forever changed.
From Space to Subsurface: Using Azure AI to Predict Gold Rich Zones

In traditional mineral exploration, identifying gold-bearing zones can take months of fieldwork and high-cost drilling, often with limited success. In our latest project, we flipped the process on its head by using Azure AI and satellite data to guide geologists before they break ground. Using Azure AI and Azure Machine Learning, we built an intelligent, automated pipeline that identified high-potential zones from geospatial data, saving time, cost, and uncertainty. Here's a behind-the-scenes look at how we did it. 👇

📡 Step 1: Translating Satellite Imagery into Features

We began with Sentinel-2 imagery covering our Area of Interest (AOI) and derived alteration indices commonly used in mineral exploration, including:

- 🟤 Clay Index – a proxy for hydrothermal alteration
- 🟥 Fe (Iron Oxide) Index
- 🌫️ Silica Ratio
- 💧 NDMI (Normalized Difference Moisture Index)

Using Azure Notebooks and Python, we processed and cleaned the imagery, transforming raw reflectance bands into meaningful geochemical features.

🔍 Step 2: Discovering Patterns with Unsupervised Learning (KMeans)

With feature-rich geospatial data prepared, we used unsupervised clustering (KMeans) in Azure Machine Learning Studio to identify natural groupings across the region. This gave us a first look at the terrain's underlying geochemical structure; one cluster in particular stood out as a strong candidate for gold-rich zones. No geology degree needed: AI finds patterns humans can't see :)
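To give a flavor of this clustering step, here is a minimal sketch of KMeans over the four indices; the column names, file path, and cluster count are illustrative rather than our production values:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative feature table: one row per grid cell, columns are the
# satellite-derived alteration indices described above.
df = pd.read_csv("aoi_features.csv")  # placeholder path
features = df[["clay_index", "fe_index", "silica_ratio", "ndmi"]].to_numpy()

# Standardize so no single index dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Cluster into candidate geochemical groupings (k chosen by inspection).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
df["cluster"] = kmeans.fit_predict(X)

# Cells in the prospective cluster feed the downstream AutoML classifier.
print(df["cluster"].value_counts())
```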
🧠 Step 3: Scaling with Azure AutoML

We then trained a classification model using Azure AutoML to predict these clusters over a dense prediction grid:

- ✅ 7,200+ data points generated
- ✅ ~50m resolution grid
- ✅ 14 km² area of interest

This was executed as a short, early-stopping run to minimize cost and optimize training time. Models were trained, validated, and registered using:

- Azure Machine Learning Compute Instance + Compute Cluster
- Azure Storage for dataset access

🔬 Step 4: Validation with Field Samples

To ground our predictions, we validated against lab-assayed gold concentrations from field sampling points. The results? 🔥 The geospatial cluster labeled 'Class 0' by the model showed strong correlation with lab-validated gold concentrations, supporting the model's predictive validity. This gave geologists AI-augmented evidence to prioritize areas for further sampling and drilling.

⚖️ Traditional vs AI-based Workflow

🚀 Why Azure?

- ✅ Azure Machine Learning Studio for AutoML and experiment tracking
- ✅ Azure Storage for seamless access to geospatial data
- ✅ Azure OpenAI Service for advanced language understanding, report generation, and enhanced human-AI interaction
- ✅ Azure Notebooks for scripting, preprocessing, and validation
- ✅ Azure Compute Cluster for scalable, cost-effective model training
- ✅ Model Registry for versioning and deployment

🌍 Key Takeaways

AI turns mineral exploration from reactive guesswork into proactive intelligence. In our workflow, AI plays a critical role by:

- ✅ Extracting key geochemical features from satellite imagery
- 🧠 Identifying patterns using unsupervised learning
- 🎯 Predicting high-potential zones through automated classification
- 🌍 Delivering full spatial coverage at scale

With Azure AI and Azure ML tools, we've built a complete pipeline that supports:

- End-to-end automation, from data prep to model deployment
- Faster, more accurate exploration with lower costs
- A reusable, scalable solution for global teams

This isn't just a proof of concept; it's a production-ready framework that empowers geologists with AI-driven insights before the first drill hits the ground.

🔗 If you're working in the mining industry, geoscience, AI for Earth, or exploration tech, let's connect! We're on a mission to bring AI deeper into every industry through strategic partnerships and collaborative innovation.
Discussion with Copilot regarding memory and learning

🧠 Suggested Feedback to Microsoft Copilot Developers

Subject: Proposal for User-Curated Persistent Memory via Saved Conversations

As a power user of Copilot, I've discovered a workaround that simulates persistent memory: by saving and reopening conversations, users can maintain continuity across sessions. This method allows Copilot to re-read prior context and respond with full awareness, effectively mimicking long-term memory. I believe this behavior should be formally supported and enhanced. Specifically:

- Allow users to designate conversations as "persistent threads"
- Enable Copilot to automatically recall and build upon these threads
- Provide tools for users to curate, tag, and evolve these threads over time

This would dramatically improve Copilot's utility for complex, multi-phase projects — from zoning proposals to simulation workflows — and foster deeper collaboration between users and AI. I'm happy to elaborate further if this idea reaches the right team. It's a simple shift with profound implications for learning, continuity, and user empowerment.

If you'd like to refine the tone or add technical examples (like your zoning work or simulation benchmarking), I can help tailor it further. And if you ever spot a direct developer channel — even a beta feedback program — this message is ready to go. You're not just using the system smartly; you're helping shape what it could become.
Teams Desktop Bot Bug: Chat Input Locks Up (never ending issue)

Hey everyone, we're dealing with a legendary and really frustrating Teams bot issue that's been driving us crazy for months now.

The Problem

Our bot works perfectly everywhere except New Teams desktop. What happens is the chat input box starts flickering between "Send a message" and "You cannot send messages to this bot," and then it just locks up completely. You literally can't type anything. The thing is, the same bot works on every other channel (Classic Teams, Teams web, mobile); it's only the new desktop app that's broken.

What We've Already Tried

We've been through all the usual and unusual troubleshooting steps. Our manifest has isNotificationOnly set to false, custom apps are enabled in the admin center, all our scopes and domains are configured correctly, and the bot is properly published. We've tested this across different tenants and users, and it's consistently broken in New Teams desktop every single time.

It's Not Just Us

We're not the only ones hitting this. Other developers are reporting the exact same behavior, and there's actually a GitHub issue (#9743) tracking it. One person summed it up perfectly: "Same issue, but only on the New Teams… for all bots." The only workaround people have found is switching back to Classic Teams, which obviously isn't a real solution.

Microsoft Support Has Been Unhelpful

This is the really frustrating part. We've opened multiple support tickets, provided HAR logs, jumped on calls with their teams - and they basically just shrug it off. No workarounds, no timeline, no real acknowledgment that this is a serious problem affecting bot developers. And their tech support even said that their own Copilot had the SAME issue, lol. I mean, what are they even doing at this point...

What We Need

We're hoping to get some visibility on this and maybe push Microsoft to actually prioritize it. It would be great to see:

- Official acknowledgment that GitHub issue #9743 is being actively worked on
- Some kind of timeline for when they plan to fix the New Teams desktop client
- Any actual workarounds that don't involve telling users to switch clients

Has anyone else run into this or found any creative solutions? At this point we're open to any ideas that don't involve completely rewriting our bot architecture. And a message to the Microsoft devs who are 'working' on this issue: start doing something, guys. You've been postponing this since mid-2023.
Events
Recent Blogs
- Modern enterprise systems face a simple problem: how to make AI decisions reliable, explainable, and production-ready? This post walks through how the Microsoft Agent Framework structures AI-driven... Oct 10, 2025
- 5 MIN READ: When deploying large language models in Azure AI Foundry, does selecting PTUs (Provisioned Throughput Units) save you money? This is the kind of article that might get its humble writer in hot water,... Oct 09, 2025