Latest Discussions
Error when creating Assistant in Microsoft Foundry using Fabric Data Agent
I am facing an issue when using a Microsoft Fabric Data Agent integrated with the new Microsoft Foundry, and I would like your assistance in investigating it.

Scenario:
1. I created a Data Agent in Microsoft Fabric.
2. I connected this Data Agent as a Tool within a project in the new Microsoft Foundry.
3. I published the agent to Microsoft Teams and Copilot for Microsoft 365.
4. I configured the required Azure permissions, assigning the appropriate roles to the Foundry project Managed Identity (as shown in the attached evidence: the Azure AI Developer and Azure AI User roles).

Issue: When trying to use the published agent, I receive the following error:

Response failed with code tool_user_error: Create assistant failed. If issue persists, please use following identifiers in any support request: ConversationId = PQbM0hGUvMF0X5EDA62v3-br activityId = PQbM0hGUvMF0X5EDA62v3-br|0000000

Additional notes:
• Permissions appear to be correctly configured in Azure.
• The error occurs during the assistant creation/execution phase via Foundry, after publishing.
• The same behavior occurs both in Teams and in Copilot for Microsoft 365.

Could you please verify:
• whether any additional permissions are required when using Fabric Data Agents as Tools in Foundry;
• whether there are known limitations or specific requirements for publishing to Teams/Copilot for Microsoft 365;
• and analyze the error identifiers provided above.

I appreciate your support and look forward to your guidance on how to resolve this issue.

Solved · 116 Views · 0 Likes · 3 Comments
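For readers reproducing a setup like the one above: attaching a Fabric Data Agent as a tool amounts to the agent definition carrying a Fabric tool entry that points at a Fabric connection. The sketch below builds such a create-agent request body as a plain dict. The tool type string `fabric_dataagent`, the `tool_resources` field names, and the placeholder connection ID are assumptions about the Agent Service request shape, not details taken from the thread; check the current Foundry reference before relying on them.

```python
# Sketch: a hypothetical create-agent request body that attaches a Fabric
# Data Agent as a tool. The "fabric_dataagent" type string and the
# tool_resources wiring are assumptions; verify against current docs.

def build_agent_payload(model: str, connection_id: str) -> dict:
    """Return an assumed create-agent body with a Fabric tool attached."""
    return {
        "model": model,
        "name": "fabric-backed-agent",
        "instructions": "Answer questions using the Fabric Data Agent.",
        "tools": [{"type": "fabric_dataagent"}],
        # Tool resources point the tool at a specific Fabric connection.
        "tool_resources": {
            "fabric_dataagent": {
                "connections": [{"connection_id": connection_id}]
            }
        },
    }

payload = build_agent_payload("gpt-4o-mini", "<your-fabric-connection-id>")
print(payload["tools"][0]["type"])
```

Whatever the exact field names turn out to be, the identity calling this create operation is the one that needs the Fabric-side permissions, which is why role assignments on the project Managed Identity matter here.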
Unable to publish Foundry agent to M365 copilot or Teams

I'm encountering an issue while publishing an agent in Microsoft Foundry to M365 Copilot or Teams. After creating the agent and Foundry resource, the process automatically created a Bot Service resource. However, I noticed that this resource has the same ID as the Application ID shown in the configuration. Is this expected behavior? If not, how should I resolve it?

I followed the steps in the official documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-copilot?view=foundry

Despite this, I keep getting the following error:

There was a problem submitting the agent. Response status code does not indicate success: 401 (Unauthorized). Status Code: 401

Any guidance on what might be causing this and how to fix it would be greatly appreciated.

Solved · 223 Views · 0 Likes · 3 Comments
Reasoning Effort for Foundry Agents

I am currently using the Azure AI Foundry Agents API and noticed that, unlike the base completions endpoint, there is no option to specify the "Reasoning Effort" parameter. Could you please confirm whether this feature is supported in the Agents API? If it is not yet supported, are there any plans to introduce Reasoning Effort control for the Agents API in future releases?

Solved · SamLS42 · Oct 28, 2025 · Copper Contributor · 94 Views · 0 Likes · 1 Comment
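For readers comparing the two APIs mentioned in the question above: on the base Chat Completions endpoint, reasoning effort is a top-level request field on reasoning-capable models. A minimal sketch of that request body follows; the model deployment name is a placeholder, and whether the Agents API accepts the same field is exactly the open question in the thread.

```python
# Sketch: a Chat Completions request body with reasoning effort set.
# "reasoning_effort" commonly accepts "low" / "medium" / "high" on
# reasoning models; the deployment name below is a placeholder.

def completions_body(prompt: str, effort: str = "medium") -> dict:
    """Build a chat request body with an explicit reasoning effort."""
    allowed = {"low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "<your-reasoning-model-deployment>",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

body = completions_body("Summarize our Q3 results.", effort="high")
print(body["reasoning_effort"])  # high
```

If the Agents API rejects the field, a common workaround is to keep latency-sensitive calls on the completions endpoint where the parameter is honored.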
Model Training Data Last Updated Date

Hi. Hoping someone can shed some light on something for me regarding published training-data dates. I just deployed a gpt-4.1-nano model. In Azure AI Foundry, when you are in the Model Catalog area and select the gpt-4.1-nano model, it shows that the training data was last updated in May 2024. However, after deploying the model (version: 2025-04-14), going into the chat playground, and asking the model when it was trained, it gives me a response of October 2023. Can someone help me with the discrepancy here? Am I misunderstanding something? I want to make sure we know how current the data is if we use a particular model within our application. Thanks. Alan

Solved · Alan Whitehouse · Jul 03, 2025 · Copper Contributor · 570 Views · 0 Likes · 4 Comments
Introducing Azure AI Models: The Practical, Hands-On Course for Real Azure AI Skills

Hello everyone, Today I'm excited to share something close to my heart. After watching so many developers (myself included) get lost in a maze of scattered docs and endless tutorials, I knew there had to be a better way to learn Azure AI. So I decided to build a guide from scratch, with the goal of breaking things down step by step and making it easy for beginners to get started with Azure. My aim was to remove the guesswork and create a resource where anyone could jump in, follow along, and actually see results without feeling overwhelmed.

Introducing the Azure AI Models Guide. This is a brand-new, solo-built, open-source repo aimed at making Azure AI accessible for everyone, whether you're just getting started or want to build real, production-ready apps using Microsoft's latest AI tools. The idea is simple: bring all the essentials into one place. You'll find clear lessons, hands-on projects, and sample code in Python, JavaScript, C#, and REST, all structured so you can learn step by step, at your own pace. I wanted this to be the resource I wish I'd had when I started: straightforward, practical, and friendly to beginners and pros alike. It's early days for the project, but I'm excited to see it grow. If you're curious, check out the repo at https://github.com/DrHazemAli/Azure-AI-Models. Your feedback, and maybe even your contributions, will help shape where it goes next!

Solved · 950 Views · 1 Like · 5 Comments
Azure ML Studio - Attached Compute Challenges

Hello community, I'm new to ML services and have been exploring ML Studio lately to understand it better from an infrastructure point of view. I understand that I should be able to attach an existing VM (Ubuntu) running in my Azure environment and use it as a compute resource in ML Studio. I've come across two challenges, and I would appreciate your help; I'm sure I am just missing something small.

Firstly, I would like to connect to my virtual machine over a private endpoint. What I have tried is to create the private endpoint to my VM following the online guidance (https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link?view=azureml-api-2&tabs=azure-portal). Both the VM and the endpoint are on the same subnet of the same vNet, yet ML Studio is unable to attach the compute. It seems to default to the public IP of the VM, which is not what I am after. SSH is still configured on port 22, and I have tried several options on my NSG for the source and destination information (service tags, IP addresses, etc.), but with no luck. Am I missing something? Is attaching an existing VM as compute over a private endpoint a supported configuration, or do private endpoints only support compute created from the ML Studio compute section?

Secondly, if I set the private endpoint aside and attach the VM directly over the internet (not desired, obviously), it is not presented to me as a compute option when I try to run my Jupyter Notebook. I only see "Azure Machine Learning Serverless Spark", or compute that was indeed created through ML Studio. I don't have the option to select the existing VM that was attached from Azure. Again, is there a fundamental step or limitation that I am overlooking?

Thanks in advance

Solved · SebastiaanR · May 28, 2025 · Brass Contributor · 349 Views · 0 Likes · 3 Comments
Understanding Azure OpenAI Service Provisioned Reservations

Hello Team, We are building an Azure OpenAI-based fine-tuned model using GPT-4o-mini for the long run. We want to understand the costing, and we came up with the following questions about PTU units under the Azure OpenAI Service Provisioned Reservations plan:

• Is there a token quota limit for a provisioned fine-tuned model deployment?
• How many fine-tuned models with provisioned capacity can be deployed under the plan?
• How is pricing affected if we deploy multiple fine-tuned models?

Model deployment: GPT-4o-mini (fine-tuned). Region: North Central US. We are doing this for an enterprise customer; kindly help us resolve this.

Solved · sachins · Mar 28, 2025 · Copper Contributor · 845 Views · 1 Like · 6 Comments
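As a back-of-the-envelope aid for sizing questions like the ones above: a PTU deployment buys a fixed throughput, so capacity math is simple multiplication. The per-PTU tokens-per-minute figure below is an illustrative assumption, not a published number; the real value varies by model and version and must be taken from the provisioned-throughput documentation.

```python
# Sketch: rough capacity math for a provisioned (PTU) deployment.
# ASSUMED_TPM_PER_PTU is illustrative only; look up the real figure for
# your model in the Azure OpenAI provisioned-throughput docs.

ASSUMED_TPM_PER_PTU = 2_500  # tokens per minute per PTU (assumption)

def estimated_tpm(ptus: int) -> int:
    """Estimated total tokens/minute for a deployment of `ptus` units."""
    return ptus * ASSUMED_TPM_PER_PTU

def ptus_needed(target_tpm: int) -> int:
    """Smallest PTU count covering a target tokens/minute load."""
    return -(-target_tpm // ASSUMED_TPM_PER_PTU)  # ceiling division

print(estimated_tpm(50))     # 125000
print(ptus_needed(300_000))  # 120
```

Note that this only models throughput, not price: whether several fine-tuned deployments can share one PTU pool, and how reservations discount them, is exactly what the question asks and is not answered by this arithmetic.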
Query

Hello everyone, I have started my BTech in Data Science and AI, and my ultimate goal is to become an AI engineer. Can anyone please advise me on the path to achieving it?

Solved · akhil_007 · Dec 31, 2024 · Copper Contributor · 164 Views · 0 Likes · 1 Comment
Talking to your relational Database using GPT not just to one table or view, but to multiple v/t

Dear Community, What is the best solution for a chat app that can interact with a relational database, not just limited to a single table or view? Thank you!

Solved · sgoswami3 · Aug 28, 2024 · Copper Contributor · 415 Views · 0 Likes · 1 Comment
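One common pattern for the question above is to prompt the model with the schema of every relevant table, so the generated SQL can join across them rather than being bound to a single table or view. A minimal, library-free sketch follows; the table and column names are made up for illustration, and in practice the schema would be read from the database's information schema.

```python
# Sketch: render multiple table schemas into a system-prompt preamble so a
# model can generate SQL that joins across them. Schema is illustrative.

def schema_prompt(tables: dict) -> str:
    """Render {table_name: [columns]} into a prompt describing the schema."""
    lines = ["You can query these tables with SQL:"]
    for name, cols in tables.items():
        lines.append(f"- {name}({', '.join(cols)})")
    lines.append("Return a single SQL statement answering the user's question.")
    return "\n".join(lines)

tables = {
    "customers": ["id", "name", "region"],
    "orders": ["id", "customer_id", "total", "ordered_at"],
}
print(schema_prompt(tables))
```

The model's output would then be executed against the database (ideally through a read-only connection), with the result rows fed back to the model to phrase the final answer.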
Tags
- AMA (74 topics)
- AI Platform (56 topics)
- TTS (50 topics)
- azure ai (21 topics)
- azure ai foundry (21 topics)
- azure ai services (18 topics)
- azure machine learning (13 topics)
- AzureAI (11 topics)
- azure (10 topics)
- machine learning (9 topics)