Latest Discussions
Get to know the core Foundry solutions
Foundry includes specialized services for vision, language, documents, and search, plus Microsoft Foundry for orchestration and governance. Here's what each does and why it matters:

Azure Vision: detect common objects in images, generate captions, descriptions, and tags based on image contents, and read text in images. Example: automate visual inspections or extract text from scanned documents.

Azure Language: helps organizations understand and work with text at scale. It can identify key information, gauge sentiment, and create summaries from large volumes of content. It also supports building conversational experiences and question-answering tools, making it easier to deliver fast, accurate responses to customers and employees. Example: understand customer feedback or translate text into multiple languages.

Azure Document Intelligence: use prebuilt or custom models to extract fields from complex documents such as invoices, receipts, and forms. Example: automate invoice processing or contract review.

Azure Search: helps you find the right information quickly by turning your content into a searchable index. It uses AI to understand and organize data, making it easier to retrieve relevant insights. This capability is often used to connect enterprise data with generative AI, ensuring responses are accurate and grounded in trusted information. Example: help employees retrieve policies or product details without digging through files.

Microsoft Foundry: acts as the orchestration and governance layer for generative AI and AI agents. It provides tools for model selection, safety, observability, and lifecycle management. Example: coordinate workflows that combine multiple AI capabilities with compliance and monitoring.

Business leaders often ask: which Foundry tool should I use? The answer depends on your workflow. Are you trying to automate document-heavy processes like invoice handling or contract review? Do you need to improve customer engagement with multilingual support or sentiment analysis? Or are you looking to orchestrate generative AI across multiple processes for marketing or operations? Connecting these needs to the right Foundry solution ensures you invest in technology that delivers measurable results.
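To make the Vision capability above concrete, here is a minimal sketch using the Azure AI Vision Image Analysis client library for Python. The endpoint, key, and image URL are placeholders, and which visual features you request would depend on your scenario.

```python
# Minimal sketch: caption an image and read its embedded text with Azure AI Vision.
# The endpoint, key, and image URL below are placeholders, not real values.
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

client = ImageAnalysisClient(
    endpoint="https://<your-vision-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.analyze_from_url(
    image_url="https://example.com/sample-invoice.png",
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.READ],
)

if result.caption:
    print(f"Caption: {result.caption.text} (confidence {result.caption.confidence:.2f})")
if result.read:
    for block in result.read.blocks:
        for line in block.lines:
            print(line.text)
```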
Index data from SharePoint document libraries => Visioning / Image Analysis

Hi, I'm currently testing the indexing of SharePoint data according to the following instructions: https://learn.microsoft.com/en-us/azure/search/search-how-to-index-sharepoint-online So far, so good. My question: vision / image analysis on images is not enabled. Besides the Microsoft links, I found two or three other good links for the SharePoint indexer, but unfortunately none for vision / image analysis. Does anyone here have this working? Any tips or links on how to implement it? Many thanks

namor38 · Dec 03, 2025 · Copper Contributor · 29 Views · 1 like · 0 Comments
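For the image-analysis part, the general pattern with Azure AI Search is to have the indexer emit normalized images and run a vision skill (OCR or image analysis) over them in a skillset. The sketch below shows that pattern with the Python azure-search-documents SDK. It mirrors the documented blob-indexer setup; whether the preview SharePoint indexer honors the same imageAction setting is exactly the open question here, so treat this as something to verify rather than a confirmed solution. The skillset, data source, and index names are placeholders.

```python
# Sketch only: attach an OCR skill to an indexer so text embedded in images gets
# extracted. Whether the preview SharePoint indexer supports imageAction the same
# way the blob indexer does is an assumption to verify.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    OcrSkill, SearchIndexerSkillset, SearchIndexer,
    InputFieldMappingEntry, OutputFieldMappingEntry,
    IndexingParameters, IndexingParametersConfiguration,
)

client = SearchIndexerClient(
    "https://<your-search-service>.search.windows.net",
    AzureKeyCredential("<admin-key>"),
)

# OCR runs over the images the indexer extracts into /document/normalized_images.
ocr = OcrSkill(
    name="ocr-skill",
    context="/document/normalized_images/*",
    inputs=[InputFieldMappingEntry(name="image", source="/document/normalized_images/*")],
    outputs=[OutputFieldMappingEntry(name="text", target_name="imageText")],
)
client.create_or_update_skillset(
    SearchIndexerSkillset(name="sharepoint-image-skillset", skills=[ocr])
)

# imageAction tells the indexer to emit normalized images for the skillset to consume.
params = IndexingParameters(
    configuration=IndexingParametersConfiguration(image_action="generateNormalizedImages")
)
indexer = SearchIndexer(
    name="sharepoint-indexer",
    data_source_name="<your-sharepoint-datasource>",
    target_index_name="<your-index>",
    skillset_name="sharepoint-image-skillset",
    parameters=params,
)
client.create_or_update_indexer(indexer)
```

An ImageAnalysisSkill could be used in place of (or alongside) the OcrSkill if captions and tags, rather than text extraction, are what you need.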
Timeline for General Availability of SharePoint Data Source in Azure AI Search

The SharePoint data source feature in Azure AI Search is currently in preview. Could Microsoft or anyone here provide any guidance on the expected timeline for its General Availability (GA)? This functionality is essential for enabling seamless integration of enterprise content into AI-powered search solutions, and clarity on the roadmap will help organizations plan their adoption strategies effectively.

Sam_Kumar · Nov 28, 2025 · Brass Contributor · 16 Views · 0 likes · 0 Comments
Structured Outputs fail with server_error when Bing Grounding is enabled in Azure AI Agents

Hi everyone, I'm running into a reproducible issue when using Structured Outputs (response_format: json_schema) together with Azure AI Agents that have the Bing Grounding tool enabled. The API always returns:

```json
"last_error": {
    "code": "server_error",
    "message": "Sorry, something went wrong."
}
```

The call returns HTTP 200, but the run fails immediately, before the model generates any tokens (prompt_tokens = 0).

Environment
- Azure AI Foundry (Sweden Central)
- Project: Azure AI Agents
- Model: gpt-4.1 (Standard DataZone)
- Agent with tool: bing_grounding (created from the UI)
- API version visible in logs: 2025-05-15-preview
- SDK: azure-ai-projects 1.2.0b6, azure-ai-agents 1.2.0b6

What I am trying to do
I am attempting to enforce a JSON Schema output using:

```python
response_format = ResponseFormatJsonSchemaType(
    json_schema=ResponseFormatJsonSchema(
        name="test_schema",
        description="Simple structured output test",
        schema={
            "type": "object",
            "properties": {
                "mensaje": {"type": "string"}
            },
            "required": ["mensaje"],
            "additionalProperties": False
        }
    )
)
```

Then calling:

```python
run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id,
    response_format=response_format
)
```

This same schema works successfully when the agent does NOT have Bing grounding enabled, or when using the model outside of Agents.

Observed behavior
The API request succeeds (HTTP 200), but the run immediately fails. Full run status:

```json
{
  "id": "run_XXXX",
  "status": "failed",
  "last_error": {
    "code": "server_error",
    "message": "Sorry, something went wrong."
  },
  "model": "gpt-4.1-EU-LDZ",
  "tools": [
    {
      "type": "bing_grounding",
      "bing_grounding": {
        "search_configurations": [
          {
            "connection_id": "...",
            "market": "es-es",
            "set_lang": "es",
            "count": 5
          }
        ]
      }
    }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "test_schema",
      "schema": {
        "type": "object",
        "properties": {"mensaje": {"type": "string"}},
        "required": ["mensaje"],
        "additionalProperties": false
      }
    }
  },
  "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
}
```

Key points:
- prompt_tokens = 0, so the failure happens before the model receives the prompt.
- The same code works if the agent has no tools, or if I remove response_format.
- The error is always the same: server_error.

How to reproduce
1. Create an Azure AI Agent in AI Foundry.
2. Add Bing Grounding to the agent.
3. Set the model to gpt-4.1.
4. Run the following minimal Python script:

```python
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ResponseFormatJsonSchema, ResponseFormatJsonSchemaType
from azure.identity import AzureCliCredential

client = AIProjectClient(
    endpoint="YOUR_ENDPOINT",
    credential=AzureCliCredential()
)

agent_id = "YOUR_AGENT_ID"

schema = {
    "type": "object",
    "properties": {"mensaje": {"type": "string"}},
    "required": ["mensaje"]
}

response_format = ResponseFormatJsonSchemaType(
    json_schema=ResponseFormatJsonSchema(
        name="test_schema",
        description="Test schema",
        schema=schema
    )
)

thread = client.agents.threads.create()
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="Say hello"
)

run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent_id,
    response_format=response_format
)
print(run.status, run.last_error)
```

Result: status = failed, last_error = server_error.

Expected behavior
Structured Outputs should work when the agent has tools enabled (including Bing grounding), or at least return a meaningful validation error instead of server_error.
Question
Is the combination Agents + Bing Grounding + Structured Outputs (json_schema) + gpt-4.1 currently supported? Is this a known limitation or bug? Is there a recommended workaround? I am happy to provide full request IDs (X-Request-ID and apim-request-id) privately via support channels if needed. Thanks!

SergioSanchezEMAIS · Nov 14, 2025 · Copper Contributor · 36 Views · 0 likes · 0 Comments
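Not an official answer, but while this is open, one possible workaround is to drop response_format on Bing-grounded agents, ask for schema-conformant JSON in the prompt, and validate the reply client-side. Below is a minimal sketch of that idea; the endpoint and agent ID are placeholders, and the exact message-retrieval helpers can vary between the beta SDK versions.

```python
# Possible workaround sketch (not an official fix): skip response_format when the
# agent has Bing grounding attached, request JSON in the prompt, and validate the
# reply client-side. YOUR_ENDPOINT and YOUR_AGENT_ID are placeholders.
import json
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ListSortOrder
from azure.identity import AzureCliCredential
from jsonschema import validate, ValidationError  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {"mensaje": {"type": "string"}},
    "required": ["mensaje"],
    "additionalProperties": False,
}

client = AIProjectClient(endpoint="YOUR_ENDPOINT", credential=AzureCliCredential())
thread = client.agents.threads.create()
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content=(
        "Say hello. Reply with a single JSON object matching this schema, "
        "with no extra text: " + json.dumps(schema)
    ),
)
# Note: no response_format here, so the run should not hit the server_error path.
run = client.agents.runs.create_and_process(thread_id=thread.id, agent_id="YOUR_AGENT_ID")

# Take the newest assistant message and validate it against the schema ourselves.
messages = client.agents.messages.list(thread_id=thread.id, order=ListSortOrder.DESCENDING)
for msg in messages:
    if msg.role == "assistant" and msg.text_messages:
        raw = msg.text_messages[-1].text.value
        try:
            payload = json.loads(raw)
            validate(payload, schema)
            print("Valid structured reply:", payload)
        except (json.JSONDecodeError, ValidationError) as err:
            print("Reply did not match the schema:", err)
        break
```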
Open-Source SDK for Evaluating AI Model Outputs (Sharing Resource)

Hi everyone, I wanted to share a helpful open-source resource for developers working with LLMs, AI agents, or prompt-based applications. One common challenge in AI development is evaluating model outputs in a consistent and structured way. Manual evaluation can be subjective and time-consuming. The project below provides a framework to help with that:

AI-Evaluation SDK: https://github.com/future-agi/ai-evaluation

Key features:
- Ready-to-use evaluation metrics
- Supports text, image, and audio evaluation
- Pre-defined prompt templates
- Quickstart examples available in Python and TypeScript
- Can integrate with workflows using toolkits like LangChain

Use case: if you are comparing different models or experimenting with prompt variations, this SDK helps standardize the evaluation process and reduces manual scoring effort. If anyone has experience with other evaluation tools or best practices, I'd be interested to hear what approaches you use.

vihargadhesariya · Nov 05, 2025 · Iron Contributor · 59 Views · 0 likes · 0 Comments
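As a generic illustration of what "consistent and structured" evaluation means in practice (this does not use the AI-Evaluation SDK above, whose API I have not verified), a tiny hand-rolled harness might look like this:

```python
# Generic illustration only: a minimal, repeatable scoring loop with a simple
# keyword-coverage metric. Test cases and metric choice are made up for the example.
from dataclasses import dataclass

@dataclass
class EvalResult:
    case_id: str
    metric: str
    score: float

def keyword_coverage(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in the model output."""
    if not expected_keywords:
        return 1.0
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)

cases = [
    ("refund-policy", "Refunds are issued within 14 days of purchase.", ["refund", "14 days"]),
    ("shipping", "We ship worldwide, usually within 3 business days.", ["ship", "3 business days"]),
]

results = [
    EvalResult(case_id, "keyword_coverage", keyword_coverage(model_output, keywords))
    for case_id, model_output, keywords in cases
]
for r in results:
    print(f"{r.case_id}: {r.metric}={r.score:.2f}")
```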
Establish an Oracle Database Connection hosted on Azure VM via AI Foundry Agent

I have come across a requirement to create an AI Foundry agent that will accept requests from users like the following:

a. "I want to connect to the abcprd database hosted on subscription sub1 and resource group rg1 and check the AWR report from x AM to y PM on a specific date (e.g., 21-Oct-2025)."
b. "Check locking sessions / RMAN backup failures / active sessions from the database abcprd hosted on subscription sub1 and resource group rg1."

The agent should be able to fetch the relevant query from a knowledge base, connect to the database, and run the report for the duration mentioned. It should then fetch the report and pass it to the LLM (GPT-4.1 in our case) for investigation. I am looking for an approach to connect to the Oracle database based on the user's request and execute the query obtained from the knowledge base.

skandhw · Oct 24, 2025 · Occasional Reader · 70 Views · 0 likes · 0 Comments
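One possible approach, sketched below, is to expose the database access as a function tool the agent can call: the agent retrieves the SQL from the knowledge base, calls the tool with it, and the returned rows are what GPT-4.1 then analyzes. This assumes the python-oracledb driver and the Azure AI Agents FunctionTool; all connection details, names, and the function itself are hypothetical placeholders, and network line-of-sight from the agent's execution environment to the Azure VM (plus credential handling via Key Vault or managed identity) still has to be solved separately.

```python
# Hypothetical sketch: expose a "run_oracle_query" function the agent can call after
# retrieving the right SQL from the knowledge base. Connection details are placeholders.
import json

import oracledb  # pip install oracledb
from azure.ai.agents.models import FunctionTool

def run_oracle_query(query: str, bind_params_json: str = "{}") -> str:
    """Run a read-only query against the abcprd database and return rows as JSON."""
    params = json.loads(bind_params_json)
    with oracledb.connect(
        user="REPORT_USER",                          # placeholder credentials
        password="<fetched-from-key-vault>",
        dsn="myvm.internal.contoso.com:1521/abcprd", # placeholder host and service name
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(query, params)
            columns = [d[0] for d in cur.description]
            rows = [dict(zip(columns, r)) for r in cur.fetchmany(200)]
    return json.dumps(rows, default=str)

# Register the function as a tool; the agent decides when to call it, and the
# returned JSON is what gets handed to GPT-4.1 for analysis.
oracle_tool = FunctionTool(functions={run_oracle_query})
```

The tool's definitions would then be attached when creating the agent, and its outputs submitted back through the normal tool-call flow.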
Azure AI foundry SDK-Tool Approval Not Triggered When Using ConnectedAgentTool() Between Agents

I am building an orchestration workflow in Azure AI Foundry using the Python SDK. Each agent uses tools exposed via an MCP server (deployed in an Azure container app), and individual agents work perfectly when run independently: tool approval is triggered, and execution proceeds as expected. I have a main agent which orchestrates the flow of these individual agents. However, when I connect one agent to another using ConnectedAgentTool(), the tool approval flow never occurs, and orchestration halts. All I see is the run status staying IN_PROGRESS for some time before it exits. The downstream (child) agent is never invoked. I have tried mcp_tool.set_approval_mode("never"), but that didn't help.

Auto-approval implementation: I have implemented a polling loop that checks the run status and auto-approves any requires_action events.

```python
import asyncio

from azure.ai.projects.aio import AIProjectClient
from azure.ai.agents.models import SubmitToolApprovalAction, ToolApproval


async def poll_run_until_complete(project_client: AIProjectClient, thread_id: str, run_id: str):
    """
    Polls the run until completion. Auto-approves any tool calls encountered.
    """
    while True:
        run = await project_client.agents.runs.get(thread_id=thread_id, run_id=run_id)
        status = getattr(run, "status", None)
        print(f"[poll] Run {run_id} status: {status}")

        # Completed states
        if status in ("succeeded", "failed", "cancelled", "completed"):
            print(f"[poll] Final run status: {status}")
            if status == "failed":
                print("Run last_error:", getattr(run, "last_error", None))
            return run

        # Auto-approve any tool calls
        if status == "requires_action" and isinstance(getattr(run, "required_action", None), SubmitToolApprovalAction):
            submit_action = run.required_action.submit_tool_approval
            tool_calls = getattr(submit_action, "tool_calls", []) or []
            if not tool_calls:
                print("[poll] requires_action but no tool_calls found. Waiting...")
            else:
                approvals = []
                for tc in tool_calls:
                    print(f"[poll] Auto-approving tool call: {tc.id} name={tc.name} args={tc.arguments}")
                    approvals.append(ToolApproval(tool_call_id=tc.id, approve=True))
                if approvals:
                    await project_client.agents.runs.submit_tool_outputs(
                        thread_id=thread_id,
                        run_id=run_id,
                        tool_approvals=approvals
                    )
                    print("[poll] Submitted tool approvals.")
        else:
            # Debug: inspect run steps if stuck
            run_steps = [s async for s in project_client.agents.run_steps.list(thread_id=thread_id, run_id=run_id)]
            if run_steps:
                for step in run_steps:
                    sid = getattr(step, "id", None)
                    sstatus = getattr(step, "status", None)
                    print(f"  step: id={sid} status={sstatus}")
                    step_details = getattr(step, "step_details", None)
                    if step_details:
                        tool_calls = getattr(step_details, "tool_calls", None)
                        if tool_calls:
                            for call in tool_calls:
                                print(f"    tool_call id={getattr(call, 'id', None)} name={getattr(call, 'name', None)} args={getattr(call, 'arguments', None)} output={getattr(call, 'output', None)}")

        await asyncio.sleep(1)
```

This code works and auto-approves tool calls for MCP tools. But while using ConnectedAgentTool(), the run never enters requires_action, so no approvals are requested and the orchestration halts.

Environment:
- azure-ai-agents==1.2.0b4
- azure-ai-projects==1.1.0b4
- Python: 3.11.13
- Auth: DefaultAzureCredential

Notes: MCP tools work and trigger approval normally when directly attached, and individual agents function as expected in standalone runs. Can anyone help here?

reshmisreedharan · Oct 03, 2025 · Copper Contributor · 52 Views · 0 likes · 0 Comments
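For context, here is a minimal sketch of how a main agent is typically wired to a child agent with ConnectedAgentTool; the endpoint, names, and instructions are placeholders. One thing worth noting, as a hypothesis rather than a confirmed explanation: connected-agent calls appear to be resolved service-side, so they may simply never surface a requires_action approval step the way directly attached MCP tools do.

```python
# Minimal wiring sketch for ConnectedAgentTool; all names and instructions are placeholders.
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ConnectedAgentTool
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint="YOUR_PROJECT_ENDPOINT",
    credential=DefaultAzureCredential(),
)

# Child agent that owns the MCP tools.
child = project_client.agents.create_agent(
    model="gpt-4.1",
    name="report-agent",
    instructions="Answer report questions using your MCP tools.",
)

# Expose the child agent to the orchestrator as a connected-agent tool.
connected = ConnectedAgentTool(
    id=child.id,
    name="report_agent",
    description="Delegates report-related questions to the report agent.",
)

main_agent = project_client.agents.create_agent(
    model="gpt-4.1",
    name="orchestrator",
    instructions="Route user requests to the connected agents when appropriate.",
    tools=connected.definitions,
)
```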
Issue when connecting from SPFX to Entra-enabled Azure AI Foundry resource

We have been successfully connecting our chat bot from an SPFX solution to a chat completion model in Azure, using key authentication. We now have a requirement to disable key authentication. This is what we've done so far:

- Disabled API key authentication in the resource.
- Gave the SharePoint Client Extensibility Web Application Principal the "Cognitive Services OpenAI User", "Cognitive Service User" and "Cognitive Data Reader" permissions on the resource.
- In the SPFX solution we added the following to package-solution.json (and approved it in the SharePoint admin site):

```json
"webApiPermissionRequests": [
  {
    "resource": "Azure Machine Learning Services",
    "scope": "user_impersonation"
  }
]
```

To connect to the chat completion API we're using fetchEventSource from '@microsoft/fetch-event-source', so we're getting a Bearer token using AadTokenProviderFactory from "@microsoft/sp-http", e.g.:

```typescript
// preceded by some code to get the tokenProvider from aadTokenProviderFactory
const token = await tokenProvider.getToken('https://ai.azure.com');

const url = "https://my-ai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview";

await fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-type': 'application/json',
    Authorization: `Bearer ${token}`
  },
  body: body,
  // ... truncated
```

We added the users (let's say, email address removed for privacy reasons) to the resource as an Azure AI User. When we try to get this to work, we get the following error:

The principal `email address removed for privacy reasons` lacks the required data action `Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action` to perform `POST /openai/deployments/{deployment-id}/chat/completions` operation.

How can we make this work? Ideally we would prefer the SPFX principal to make the request to the chat completion API, without needing to add end users to the resource through IAC, but my understanding is that AadTokenProviderFactory only issues delegated access tokens.

WFC · Sep 27, 2025 · Copper Contributor · 25 Views · 0 likes · 0 Comments
Responses API for gpt-4.1 in Europe

Hello everyone! I'm writing here trying to figure out something about the availability of the "responses" APIs in European regions: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?tabs=python-key

I'm trying to deploy a /responses endpoint for the model we are currently using, gpt-4.1, since I've read that the /completions endpoint will be retired by OpenAI starting from August 2026. Our application is currently migrating all API calls from completions to responses, and we were wondering if we could already do the same for our clients in Europe, who have to comply with GDPR and currently use our Azure subscription.

On the page linked above I can see some regions that would fit our needs, in particular:
- francecentral
- norwayeast
- polandcentral
- swedencentral
- switzerlandnorth

But then I can also read "Not every model is available in the regions supported by the responses API.", which probably already answers my question: from the Azure AI Foundry portal, I wasn't able to deploy such an endpoint in those regions, except for the o3 model. For the gpt-4.1 model, only the completions endpoint is listed, while searching for "Responses" (in "Deploy base model") returns only o3 and a few other models.

Can you confirm that I'm not doing anything wrong (looking in the wrong place to deploy such an endpoint), and that the gpt-4.1 responses API is currently not available in any European region? Do you think it's realistic that it will be available soon (like in 2025)? Any European region would work for us, with the "DataZone-Standard" deployment type, which already ensures GDPR compliance (no need for a "Standard" one in a specific region).

Thank you for your attention, have a nice day or evening.

Awhy_Developer · Sep 18, 2025 · Copper Contributor · 76 Views · 0 likes · 0 Comments
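For reference, once a gpt-4.1 /responses deployment is available in a suitable region, the call itself is straightforward with the OpenAI Python SDK against Azure. This is only a sketch: the endpoint, key, API version, and deployment name are placeholders, and the regional availability is exactly what is in question above.

```python
# Sketch of a /responses call against an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders to adapt.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2025-03-01-preview",  # a preview API version that exposes /responses
)

response = client.responses.create(
    model="gpt-4.1",  # your deployment name
    input="Say hello",
)
print(response.output_text)
```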
Tags
- AMA (74 Topics)
- AI Platform (56 Topics)
- TTS (50 Topics)
- azure ai (20 Topics)
- azure ai foundry (19 Topics)
- azure ai services (18 Topics)
- azure machine learning (13 Topics)
- AzureAI (11 Topics)
- machine learning (9 Topics)
- azure (8 Topics)