Recent Discussions
grok4-fast-non-reasoning: persistent 503 errors?
I have two identical (afaict) deployments: grok-4-fast-reasoning and grok-4-fast-non-reasoning. The first works, the second doesn't. Same result whether using the playground, curl, the python SDK, etc. Same result even if I try new deployments of grok-4-fast-non-reasoning. Is it down for everyone else, or just me?

❯ curl -X POST "https://my-deployment-name-us1.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AZURE_API_KEY" \
    -d '{
      "messages": [
        { "role": "user", "content": "I am going to Paris, what should I see?" }
      ],
      "max_completion_tokens": 16000,
      "temperature": 1,
      "top_p": 1,
      "model": "grok-4-fast-non-reasoning"
    }'

{"error":{"code":"Service Unavailable","message":"{\"code\":\"The service is currently unavailable\",\"error\":\"The model is temporarily unavailable.\"}","status":503}}
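Not a diagnosis, but a small probe can help tell a transient 503 from a deployment that is consistently unavailable. The sketch below reuses the endpoint, API version, model name, and AZURE_API_KEY variable from the curl call above; all of those are placeholders to swap for your own values.

import os
import time
import requests

URL = ("https://my-deployment-name-us1.services.ai.azure.com"
       "/models/chat/completions?api-version=2024-05-01-preview")
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['AZURE_API_KEY']}",
}
BODY = {
    "messages": [{"role": "user", "content": "ping"}],
    "max_completion_tokens": 16,
    "model": "grok-4-fast-non-reasoning",
}

def probe(max_attempts: int = 5) -> None:
    """Retry with exponential backoff so a one-off 503 can be separated
    from a deployment that never comes back."""
    for attempt in range(1, max_attempts + 1):
        resp = requests.post(URL, headers=HEADERS, json=BODY, timeout=60)
        print(f"attempt {attempt}: HTTP {resp.status_code}")
        if resp.status_code != 503:
            print(resp.text[:500])
            return
        time.sleep(2 ** attempt)  # 2s, 4s, 8s, ...
    print("Still 503 after all retries - likely a capacity/deployment issue, not a blip.")

if __name__ == "__main__":
    probe()

If the probe still fails after backoff across regions and fresh deployments, that points away from a local configuration problem.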
Structured Outputs fail with server_error when Bing Grounding is enabled in Azure AI Agents

Hi everyone, I'm running into a reproducible issue when using Structured Outputs (response_format: json_schema) together with Azure AI Agents that have the Bing Grounding tool enabled. The API always returns:

"last_error": { "code": "server_error", "message": "Sorry, something went wrong." }

The call returns HTTP 200, but the run fails immediately before the model generates any tokens (prompt_tokens = 0).

Environment
- Azure AI Foundry (Sweden Central)
- Project: Azure AI Agents
- Model: gpt-4.1 (Standard DataZone)
- Agent with tool: bing_grounding (created from the UI)
- API version visible in logs: 2025-05-15-preview
- SDK: azure-ai-projects 1.2.0b6, azure-ai-agents 1.2.0b6

What I am Trying to Do
I am attempting to enforce a JSON Schema output using:

response_format = ResponseFormatJsonSchemaType(
    json_schema=ResponseFormatJsonSchema(
        name="test_schema",
        description="Simple structured output test",
        schema={
            "type": "object",
            "properties": {
                "mensaje": {"type": "string"}
            },
            "required": ["mensaje"],
            "additionalProperties": False
        }
    )
)

Then calling:

run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id,
    response_format=response_format
)

This same schema works successfully when the agent does NOT have Bing grounding enabled, or when using the model outside of Agents.

Observed Behavior
The API request succeeds (HTTP 200), but the run immediately fails. Full run status:

{
  "id": "run_XXXX",
  "status": "failed",
  "last_error": { "code": "server_error", "message": "Sorry, something went wrong." },
  "model": "gpt-4.1-EU-LDZ",
  "tools": [
    {
      "type": "bing_grounding",
      "bing_grounding": {
        "search_configurations": [
          { "connection_id": "...", "market": "es-es", "set_lang": "es", "count": 5 }
        ]
      }
    }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "test_schema",
      "schema": {
        "type": "object",
        "properties": {"mensaje": {"type": "string"}},
        "required": ["mensaje"],
        "additionalProperties": false
      }
    }
  },
  "usage": { "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0 }
}

Key points:
- prompt_tokens = 0 → the failure happens before the model receives the prompt.
- The same code works if the agent has no tools, or if I remove response_format.
- The error is always the same: server_error.

How to Reproduce
1. Create an Azure AI Agent in AI Foundry.
2. Add Bing Grounding to the agent.
3. Set the model to gpt-4.1.
4. Run the following minimal Python script:

from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ResponseFormatJsonSchema, ResponseFormatJsonSchemaType
from azure.identity import AzureCliCredential

client = AIProjectClient(
    endpoint="YOUR_ENDPOINT",
    credential=AzureCliCredential()
)

agent_id = "YOUR_AGENT_ID"

schema = {
    "type": "object",
    "properties": {"mensaje": {"type": "string"}},
    "required": ["mensaje"]
}

response_format = ResponseFormatJsonSchemaType(
    json_schema=ResponseFormatJsonSchema(
        name="test_schema",
        description="Test schema",
        schema=schema
    )
)

thread = client.agents.threads.create()
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="Say hello"
)

run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent_id,
    response_format=response_format
)

print(run.status, run.last_error)

Result: status = failed, last_error = server_error.

Expected Behavior
Structured Outputs should work when the agent has tools enabled (including Bing grounding), or at least return a meaningful validation error instead of server_error.
Question
Is the combination Agents + Bing Grounding + Structured Outputs (json_schema) + gpt-4.1 currently supported? Is this a known limitation or bug? Is there a recommended workaround?

I am happy to provide full request IDs (X-Request-ID and apim-request-id) privately via support channels if needed. Thanks!
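While waiting for confirmation on support, one workaround some teams use is to drop response_format on the grounded agent, ask for JSON in the agent's instructions instead, and validate the reply client-side. A minimal sketch, reusing the schema from the post; the jsonschema package is an added dependency and the instruction wording is up to you:

import json
import jsonschema  # pip install jsonschema

# Same schema as in the post.
SCHEMA = {
    "type": "object",
    "properties": {"mensaje": {"type": "string"}},
    "required": ["mensaje"],
    "additionalProperties": False,
}

def parse_structured_reply(raw_text: str) -> dict:
    """Validate a model reply against the schema after the fact.

    Assumes the agent was told in its instructions to answer with a single
    JSON object only; raises if the reply does not parse or conform."""
    data = json.loads(raw_text)
    jsonschema.validate(instance=data, schema=SCHEMA)
    return data

# Example: parse_structured_reply('{"mensaje": "hola"}') -> {'mensaje': 'hola'}

This does not give the hard guarantees of server-side structured outputs, but it keeps Bing grounding usable until the combination is confirmed to work.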
Azure AI foundry projects
Hello, my use case: I need to call an agent that I created inside my Azure AI Foundry project. I have the API route, an API key, and, most importantly, the agent ID. The documentation tells me I need some form of authorization; I have already tried that and it works. Now I am thinking about the next step: how do I use this agent in production-ready apps? I cannot create accounts for everybody who calls my service. How can this be done?
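One common pattern for production apps, rather than creating an account per end user, is to let a single service identity (an Entra app registration granted the appropriate role on the Foundry resource) call the agent from your backend. Below is a hedged sketch using the same threads/messages/runs calls that appear in other posts on this page; the environment-variable names, endpoint, and client constructor details are assumptions to check against your SDK version.

import os
from azure.identity import ClientSecretCredential
from azure.ai.projects import AIProjectClient

# One service identity calls the agent on behalf of your application,
# so end users never need their own Azure accounts.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    client_id=os.environ["AZURE_CLIENT_ID"],
    client_secret=os.environ["AZURE_CLIENT_SECRET"],
)

# Recent azure-ai-projects builds take an endpoint; older ones use from_connection_string.
client = AIProjectClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=credential)

def ask_agent(agent_id: str, question: str):
    """Run one question through the agent using the service identity."""
    thread = client.agents.threads.create()
    client.agents.messages.create(thread_id=thread.id, role="user", content=question)
    run = client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent_id)
    if run.status == "failed":
        raise RuntimeError(f"Run failed: {run.last_error}")
    # Message content shape varies by SDK version; print it for inspection here.
    for message in client.agents.messages.list(thread_id=thread.id):
        print(message.role, message.content)
    return run

Your own app then handles its users however it likes (its own auth, anonymous, etc.) and only the backend holds the Azure credentials.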
BYO Thread Storage in Azure AI Foundry using Python

Build scalable, secure, and persistent multi-agent memory with your own storage backend.

As AI agents evolve beyond one-off interactions, persistent context becomes a critical architectural requirement. Azure AI Foundry’s latest update introduces a powerful capability — Bring Your Own (BYO) Thread Storage — enabling developers to integrate custom storage solutions for agent threads. This feature empowers enterprises to control how agent memory is stored, retrieved, and governed, aligning with compliance, scalability, and observability goals.

What Is “BYO Thread Storage”?
In Azure AI Foundry, a thread represents a conversation or task execution context for an AI agent. By default, thread state (messages, actions, results, metadata) is stored in Foundry’s managed storage. With BYO Thread Storage, you can now:
- Store threads in your own database — Azure Cosmos DB, SQL, Blob, or even a Vector DB.
- Apply custom retention, encryption, and access policies.
- Integrate with your existing data and governance frameworks.
- Enable cross-region disaster recovery (DR) setups seamlessly.

This gives enterprises full control of data lifecycle management — a big step toward AI-first operational excellence.

Architecture Overview
A typical setup involves:
- Azure AI Foundry Agent Service — hosts your multi-agent setup.
- Custom Thread Storage Backend — e.g., Azure Cosmos DB, Azure Table, or PostgreSQL.
- Thread Adapter — Python class implementing the Foundry storage interface.
- Disaster Recovery (DR) replication — optional replication of threads to a secondary region.

Implementing BYO Thread Storage using Python

Prerequisites
First, install the necessary Python packages:

pip install azure-ai-projects azure-cosmos azure-identity

Setting Up the Storage Layer

from azure.cosmos import CosmosClient, PartitionKey
from azure.identity import DefaultAzureCredential
import json
from datetime import datetime

class ThreadStorageManager:
    def __init__(self, cosmos_endpoint, database_name, container_name):
        credential = DefaultAzureCredential()
        self.client = CosmosClient(cosmos_endpoint, credential=credential)
        self.database = self.client.get_database_client(database_name)
        self.container = self.database.get_container_client(container_name)

    def create_thread(self, user_id, metadata=None):
        """Create a new conversation thread"""
        thread_id = f"thread_{user_id}_{datetime.utcnow().timestamp()}"
        thread_data = {
            'id': thread_id,
            'user_id': user_id,
            'messages': [],
            'created_at': datetime.utcnow().isoformat(),
            'updated_at': datetime.utcnow().isoformat(),
            'metadata': metadata or {}
        }
        self.container.create_item(body=thread_data)
        return thread_id

    def add_message(self, thread_id, role, content):
        """Add a message to an existing thread"""
        thread = self.container.read_item(item=thread_id, partition_key=thread_id)
        message = {
            'role': role,
            'content': content,
            'timestamp': datetime.utcnow().isoformat()
        }
        thread['messages'].append(message)
        thread['updated_at'] = datetime.utcnow().isoformat()
        self.container.replace_item(item=thread_id, body=thread)
        return message

    def get_thread(self, thread_id):
        """Retrieve a complete thread"""
        try:
            return self.container.read_item(item=thread_id, partition_key=thread_id)
        except Exception as e:
            print(f"Thread not found: {e}")
            return None

    def get_thread_messages(self, thread_id):
        """Get all messages from a thread"""
        thread = self.get_thread(thread_id)
        return thread['messages'] if thread else []

    def delete_thread(self, thread_id):
        """Delete a thread"""
        self.container.delete_item(item=thread_id, partition_key=thread_id)
Integrating with Azure AI Foundry

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

class ConversationManager:
    def __init__(self, project_endpoint, storage_manager):
        self.ai_client = AIProjectClient.from_connection_string(
            credential=DefaultAzureCredential(),
            conn_str=project_endpoint
        )
        self.storage = storage_manager

    def start_conversation(self, user_id, system_prompt):
        """Initialize a new conversation"""
        thread_id = self.storage.create_thread(
            user_id=user_id,
            metadata={'system_prompt': system_prompt}
        )
        # Add system message
        self.storage.add_message(thread_id, 'system', system_prompt)
        return thread_id

    def send_message(self, thread_id, user_message, model_deployment):
        """Send a message and get AI response"""
        # Store user message
        self.storage.add_message(thread_id, 'user', user_message)
        # Retrieve conversation history
        messages = self.storage.get_thread_messages(thread_id)
        # Call Azure AI with conversation history
        response = self.ai_client.inference.get_chat_completions(
            model=model_deployment,
            messages=[
                {"role": msg['role'], "content": msg['content']}
                for msg in messages
            ]
        )
        assistant_message = response.choices[0].message.content
        # Store assistant response
        self.storage.add_message(thread_id, 'assistant', assistant_message)
        return assistant_message

Usage Example

# Initialize storage and conversation manager
storage = ThreadStorageManager(
    cosmos_endpoint="https://your-cosmos-account.documents.azure.com:443/",
    database_name="conversational-ai",
    container_name="threads"
)

conversation_mgr = ConversationManager(
    project_endpoint="your-project-connection-string",
    storage_manager=storage
)

# Start a new conversation
thread_id = conversation_mgr.start_conversation(
    user_id="user123",
    system_prompt="You are a helpful AI assistant."
)

# Send messages
response1 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="What is machine learning?",
    model_deployment="gpt-4"
)
print(f"AI: {response1}")

response2 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="Can you give me an example?",
    model_deployment="gpt-4"
)
print(f"AI: {response2}")

# Retrieve full conversation history
history = storage.get_thread_messages(thread_id)
for msg in history:
    print(f"{msg['role']}: {msg['content']}")

Key Highlights:
- Threads are stored in Cosmos DB under your control.
- You can attach metadata such as region, owner, or compliance tags.
- Integrates natively with existing Azure identity and Key Vault.

Disaster Recovery & Resilience
When coupled with geo-replicated Cosmos DB or Azure Storage RA-GRS, your BYO thread storage becomes resilient by design:
- Primary writes in East US replicate to Central US.
- Foundry auto-detects failover and reconnects to the secondary region.
- Threads remain available during outages — ensuring operational continuity.

This aligns perfectly with the AI-First Operational Excellence architecture theme, where reliability and observability drive intelligent automation.

Best Practices
- Security: Use Azure Key Vault for credentials & encryption keys.
- Compliance: Configure data residency & retention in your own DB.
- Observability: Log thread CRUD operations to Azure Monitor or Application Insights.
- Performance: Use async I/O and partition keys for large workloads.
- DR: Enable geo-redundant storage & failover tests regularly.

When to Use BYO Thread Storage
- Regulated industries (BFSI, Healthcare, etc.): maintain data control & audit trails.
- Multi-region agent deployments: support DR and data sovereignty.
- Advanced analytics on conversation data: query threads directly from your DB.
- Enterprise observability: unified monitoring across Foundry + Ops.

The Future
BYO Thread Storage opens doors to advanced use cases — federated agent memory, semantic retrieval over past conversations, and dynamic workload failover across regions. For architects, this feature is a key enabler for secure, scalable, and compliant AI system design. For developers, it means more flexibility, transparency, and integration power.

Summary
- Custom thread storage: full control over data.
- Python adapter support: easy extensibility.
- Multi-region DR ready: business continuity.
- Azure-native security: enterprise-grade safety.

Conclusion
Implementing BYO thread storage in Azure AI Foundry gives you the flexibility to build AI applications that meet your specific requirements for data governance, performance, and scalability. By taking control of your storage, you can create more robust, compliant, and maintainable AI solutions.
Home decor platform development

My customer is developing a home decor platform where users can upload images of their rooms, apply different decor items like furniture or wall colors, and visualize how they would look. The platform relies on AI tools and APIs for object recognition and placement but is facing issues with accurately identifying the floor and objects: the AI is currently rejecting the floor or placing items incorrectly. They are also working on improving search results based on uploaded images, which show similar but not exact matches, leading to inconsistencies in user experience. They are facing challenges in achieving accuracy and reducing processing time.
Synchronous REST API for Language Text Summarization

This topic references the Language Text Summarization documentation: https://learn.microsoft.com/en-us/azure/ai-services/language-service/summarization/how-to/text-summarization?source=recommendations

The Microsoft documentation on Language Text Summarization (Abstractive and Extractive) covers the asynchronous REST API call, which is ideal for situations where we need to pass in files or long text for summarization. I need to implement a solution where I call the REST API synchronously for short text summarization. Is this even possible? If yes, please point me to the resource/documentation. Thanks, briancodey
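In case it helps while waiting for an answer: the documentation the post cites exposes summarization as an asynchronous job, and a blocking submit-then-poll wrapper gives sync-like behavior for short text. The sketch below is an assumption-heavy illustration of that pattern; the API version, task parameters, and environment-variable names are placeholders to verify against the summarization docs before use.

import os
import time
import requests

ENDPOINT = os.environ["LANGUAGE_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["LANGUAGE_KEY"]
# API version and parameter names are assumptions - check the docs for your region.
URL = f"{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-01"

def summarize_blocking(text: str, sentence_count: int = 3, timeout_s: int = 60) -> dict:
    """Submit an extractive-summarization job and poll until it finishes,
    so the caller experiences it as a synchronous call for short text."""
    body = {
        "displayName": "short-text-summary",
        "analysisInput": {"documents": [{"id": "1", "language": "en", "text": text}]},
        "tasks": [{"kind": "ExtractiveSummarization",
                   "taskName": "sum",
                   "parameters": {"sentenceCount": sentence_count}}],
    }
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    submit = requests.post(URL, headers=headers, json=body, timeout=30)
    submit.raise_for_status()
    job_url = submit.headers["operation-location"]

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        job = requests.get(job_url, headers=headers, timeout=30).json()
        if job.get("status") in ("succeeded", "failed", "cancelled"):
            return job
        time.sleep(1)
    raise TimeoutError("Summarization job did not finish in time")

Whether a true single-call synchronous endpoint exists for summarization is exactly the open question here, so treat this only as a stopgap pattern.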
Azure OpenAI Model Upgrades: Prompt Safety Pitfalls with GPT-4o and Beyond

Upgrading to new Azure OpenAI models? Beware: your old prompts might break.

I recently worked on upgrading our Azure OpenAI integration from gpt-35-turbo to gpt-4o-mini, expecting it to be a straightforward configuration change. Just update the Azure Foundry resource endpoint, change the model name, deploy the code — and voilà, everything should work as before. Right? Not quite.

The Unexpected Roadblock
As soon as I deployed the updated code, I started seeing 400 status errors from the OpenAI endpoint. The message was cryptic: "The response was filtered due to the prompt triggering Azure OpenAI's content management policy." At first, I assumed it was a bug in my SDK call or a malformed payload. But after digging deeper, I realized this wasn't a technical failure — it was a content safety filter kicking in before the prompt even reached the model.

The Prompt That Broke It
Here's the original system prompt that worked perfectly with gpt-35-turbo:

YOU ARE A QNA EXTRACTOR IN TEXT FORMAT. YOU WILL GET A SET OF SURVEYJS QNA JSONS. YOU WILL CONVERT THAT INTO A TEXT DOCUMENT. FOR THE QUESTIONS WHERE NO ANSWER WAS GIVEN, MARK THOSE AS NO ANSWER. HERE IS THE QNA: BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE A DOCUMENT TO BE PUBLISHED. {{$style}} +++++ {{$input}} +++++

This prompt had been reliable for months. But with gpt-4o-mini, it triggered Azure's new input safety layer, introduced in mid-2024.

What Changed with GPT-4o-mini?
Unlike gpt-35-turbo, the gpt-4o family:
- Applies stricter content filtering — not just on the output, but also on the input prompt.
- Treats system messages and user messages as role-based chat messages, passing them through moderation before the model sees them.
- Flags prompts that look like prompt injection attempts, such as aggressive instructions like "YOU ARE…", "BE CREATIVE", "GENERATE", "PROFESSIONAL".
- Flags unusual formatting (like `+++++`), artificial delimiters, or token markers, as they may look like encoded content.

In short, the model didn't even get a chance to process my prompt — it was blocked at the gate.

Fixing It: Softening the Prompt
The solution wasn't to rewrite the entire logic, but to soften the system prompt and remove formatting that could be misinterpreted. Here's what helped:
- Replacing "YOU ARE…" with a gentler instruction like "Please help convert the following Q&A data…"
- Removing creative directives like "BE CREATIVE" or "PROFESSIONAL" unless clearly contextualized.
- Avoiding raw JSON markers and template syntax (`{{ }}`, `+++++`) in the prompt.

Once I made these changes, the model responded smoothly — and the upgrade was finally complete.

Evolving the Prompt — Not Abandoning It
Interestingly, for some prompts I didn't have to completely eliminate the "YOU ARE…" structure. Instead, I refined it to be more natural and less directive. Here's a comparison:

❌ Old Prompt (Blocked):
YOU ARE A SOURCING AND PROCUREMENT MANAGER. YOU WILL GET BUYER'S REQUIREMENTS IN QNA FORMAT. HERE IS THE QNA: {{$input}} +++++ YOU WILL GENERATE TOP 10 {{$category}} RELATED QUESTIONS THAT CAN BE ASKED OF A SUPPLIER IN JSON FORMAT. THE JSON MUST HAVE QUESTION NUMBER AS THE KEY AND QUESTION TEXT AS THE QUESTION. DON'T ADD ANY DESCRIPTION TEXT OR FORMATTING IN THE OUTPUT. BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE AN RFX.

✅ New Prompt (Accepted):
You are an AI assistant that helps clarify sourcing requirements. You will receive buyer's requirements in QnA format.
Here is the QnA: {$input}
Your task is to generate the top 10 {$category} related questions that can be asked of a supplier, in JSON format.
- The JSON must use the question number as the key and the question text as the value.
- Do not include any description text or formatting in the output.
- Focus on creating clear, professional, and relevant questions that will help prepare an RFX.

Key Takeaways
- Model upgrades aren't just about configuration changes — they can introduce new moderation layers that affect prompt design.
- Prompt safety filtering is now a first-class citizen in Azure OpenAI, especially for newer models.
- System prompts need to be rewritten with moderation in mind, not just clarity or creativity.

This experience reminded me that even small upgrades can surface big learning moments. If you're planning to move to gpt-4o-mini or any newer Azure OpenAI model, take a moment to review your prompts — they might need a little more finesse than before.
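To make these rejections easier to spot during an upgrade, it can help to surface prompt-filter errors explicitly instead of treating them as generic 400s. A minimal sketch using the openai Python package against an Azure deployment; the endpoint, key, API version, and softened prompt text are placeholders, and the "content_filter" check reflects how Azure typically labels blocked inputs (verify against your own error payloads).

import os
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-06-01",  # assumption - use whatever version your deployment targets
)

SOFT_SYSTEM_PROMPT = (
    "You are an AI assistant that converts SurveyJS Q&A JSON into a readable "
    "text document. Mark unanswered questions as 'No answer'."
)

def run_with_filter_handling(deployment: str, user_content: str) -> str:
    """Call the chat endpoint and raise a descriptive error when the input
    prompt itself is blocked by the content filter."""
    try:
        resp = client.chat.completions.create(
            model=deployment,
            messages=[
                {"role": "system", "content": SOFT_SYSTEM_PROMPT},
                {"role": "user", "content": user_content},
            ],
        )
        return resp.choices[0].message.content
    except openai.BadRequestError as err:
        if "content_filter" in str(err):
            raise RuntimeError(
                "Prompt was blocked by the input content filter - "
                "soften the system prompt or review the filter configuration."
            ) from err
        raise

Logging these cases separately makes it much faster to tell a moderation rejection from a genuine request bug after a model swap.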
Open-Source SDK for Evaluating AI Model Outputs (Sharing Resource)

Hi everyone, I wanted to share a helpful open-source resource for developers working with LLMs, AI agents, or prompt-based applications. One common challenge in AI development is evaluating model outputs in a consistent and structured way. Manual evaluation can be subjective and time-consuming. The project below provides a framework to help with that:

AI-Evaluation SDK
https://github.com/future-agi/ai-evaluation

Key Features:
- Ready-to-use evaluation metrics
- Supports text, image, and audio evaluation
- Pre-defined prompt templates
- Quickstart examples available in Python and TypeScript
- Can integrate with workflows using toolkits like LangChain

Use Case: If you are comparing different models or experimenting with prompt variations, this SDK helps standardize the evaluation process and reduces manual scoring effort. If anyone has experience with other evaluation tools or best practices, I'd be interested to hear what approaches you use.
Connect AI Agent via postman
I'm having the hardest time trying to connect to my custom agent (agent_id: asst_g8DVMGAOLiXXk7WmiTCMQBgj) via Postman. I'm able to authenticate fine and receive the secure token, which lets me run my deployment POST with no issues (https://aiagentoverview.cognitiveservices.azure.com/openai/deployments/gpt-4.1/chat/completions?api-version=2025-01-01-preview). But how do I run a POST to my agent_id asst_g8DVMGAOLiXXk7WmiTCMQBgj? I can't find any instructions anywhere.
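In case it helps while waiting for an answer: if the asst_* agent is exposed through an Assistants-style REST surface on the same resource, the usual sequence is create a thread, add a message, start a run for the assistant ID, poll the run, then read the messages. The paths and API version below are assumptions to verify against the REST reference or the "view code" samples in the portal; the sketch uses Python, but the same five requests can be recreated in Postman.

import os
import time
import requests

# Assumed Assistants-style paths on the same resource - verify before relying on them.
BASE = "https://aiagentoverview.cognitiveservices.azure.com/openai"
API = "api-version=2025-01-01-preview"  # assumption: same version as the working deployment call
HEADERS = {"Authorization": f"Bearer {os.environ['AZURE_TOKEN']}", "Content-Type": "application/json"}
AGENT_ID = "asst_g8DVMGAOLiXXk7WmiTCMQBgj"

thread = requests.post(f"{BASE}/threads?{API}", headers=HEADERS, json={}).json()
requests.post(f"{BASE}/threads/{thread['id']}/messages?{API}", headers=HEADERS,
              json={"role": "user", "content": "Hello"})
run = requests.post(f"{BASE}/threads/{thread['id']}/runs?{API}", headers=HEADERS,
                    json={"assistant_id": AGENT_ID}).json()

# Poll the run, then read the thread's messages for the agent's reply.
while run.get("status") in ("queued", "in_progress"):
    time.sleep(1)
    run = requests.get(f"{BASE}/threads/{thread['id']}/runs/{run['id']}?{API}", headers=HEADERS).json()

print(requests.get(f"{BASE}/threads/{thread['id']}/messages?{API}", headers=HEADERS).json())

If the agent was created through the Foundry Agent Service rather than the Azure OpenAI Assistants surface, the project endpoint and paths differ, so confirming which surface the agent lives on is the first step.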
Less models in ai foundry that supports agentic use

Hi, I have seen that nearly 11,000 models are available in Azure AI Foundry, but when I try to deploy models that support Agents, only 18 models are available for selection. Is there any reason behind this? Are we planning to support many models from external providers, or rely on GPT models as the first priority?
when will Prompt flow feature be available in foundry based projects

Hi, I see that the AI Foundry project type is being recommended for AI projects, but Prompt Flow is not supported in Foundry projects; it is only supported in hub-based projects. Is there any timeline for bringing the Prompt Flow feature to Foundry-based projects? It may be difficult to switch between two types of project for different functionalities.
Reasoning Effort for Foundry Agents

I am currently using the Azure AI Foundry Agents API and noticed that, unlike the base completions endpoint, there is no option to specify the "Reasoning Effort" parameter. Could you please confirm if this feature is supported in the Agents API? If not yet supported, are there any plans to introduce Reasoning Effort control for the Agents API in future releases?
Establish an Oracle Database Connection hosted on Azure VM via AI Foundry Agent

I have come across a requirement to create an AI Foundry agent that accepts requests from users like the following:
a. "I want to connect to the abcprd database hosted on subscription sub1 and resource group rg1 and check the AWR report from xAM-yPM on a specific date (e.g. 21-Oct-2025)."
b. "Check locking sessions / RMAN backup failures / active sessions on the database abcprd hosted on subscription sub1 and resource group rg1."

The agent should be able to fetch the relevant query from a knowledge base, connect to the database, and run the report for the duration mentioned. It should then fetch the report and pass it to the LLM (GPT-4.1 in our case) for investigation. I am looking for an approach to connect to the Oracle database based on the user's request and execute the query obtained from the knowledge base.
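Not an official pattern, but one way to frame the database side is a plain Python function that runs a read-only diagnostic query with the python-oracledb driver and returns rows the agent (or your orchestration code) can hand back to the model, for example after registering it as a function tool per the Foundry Agents function-calling docs. The connection details and example query below are placeholders; resolving host/service from the subscription, resource group, and database name the user supplies is left to a lookup of your own.

import oracledb  # pip install oracledb (thin mode, no Oracle client install needed)

def run_db_report(dsn: str, user: str, password: str, query: str, params: dict) -> list[dict]:
    """Run a read-only diagnostic query (locking sessions, RMAN status, etc.)
    and return rows as dicts so the result can be passed back to the model."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query, params)
            columns = [c[0] for c in cur.description]
            return [dict(zip(columns, row)) for row in cur.fetchall()]

# Example call with a hypothetical query pulled from your knowledge base:
# rows = run_db_report(
#     dsn="dbhost.internal:1521/abcprd", user="monitor", password="***",
#     query="SELECT sid, serial#, blocking_session FROM v$session WHERE blocking_session IS NOT NULL",
#     params={},
# )

Keeping query selection in the knowledge base and credentials in Key Vault (rather than in the prompt) keeps the agent itself stateless about database secrets.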
Trigger cant read fabric data agent

I made an agent in Azure AI Foundry and use a Fabric data agent as its knowledge source. Everything runs well until I try to use a trigger to orchestrate my agent. I have added my trigger identity to the Fabric workspace where my Fabric data agent and my lakehouse are located. The trigger runs without errors, but the agent does not respond the way it does when I prompt it via the playground. Why?
Azure AI foundry SDK-Tool Approval Not Triggered When Using ConnectedAgentTool() Between Agents

I am building an orchestration workflow in Azure AI Foundry using the Python SDK. Each agent uses tools exposed via an MCP server (deployed in an Azure container app), and individual agents work perfectly when run independently — tool approval is triggered, and execution proceeds as expected. I have a main agent which orchestrates the flow of these individual agents. However, when I connect one agent to another using ConnectedAgentTool(), the tool approval flow never occurs, and orchestration halts. All I see is the run status as IN_PROGRESS for some time, and then it exits. The downstream (child) agent is never invoked. I have tried mcp_tool.set_approval_mode("never"), but that didn't help.

Auto-Approval Implementation: I have implemented a polling loop that checks the run status and auto-approves any requires_action events.

async def poll_run_until_complete(project_client: AIProjectClient, thread_id: str, run_id: str):
    """
    Polls the run until completion. Auto-approves any tool calls encountered.
    """
    while True:
        run = await project_client.agents.runs.get(thread_id=thread_id, run_id=run_id)
        status = getattr(run, "status", None)
        print(f"[poll] Run {run_id} status: {status}")

        # Completed states
        if status in ("succeeded", "failed", "cancelled", "completed"):
            print(f"[poll] Final run status: {status}")
            if status == "failed":
                print("Run last_error:", getattr(run, "last_error", None))
            return run

        # Auto-approve any tool calls
        if status == "requires_action" and isinstance(getattr(run, "required_action", None), SubmitToolApprovalAction):
            submit_action = run.required_action.submit_tool_approval
            tool_calls = getattr(submit_action, "tool_calls", []) or []
            if not tool_calls:
                print("[poll] requires_action but no tool_calls found. Waiting...")
            else:
                approvals = []
                for tc in tool_calls:
                    print(f"[poll] Auto-approving tool call: {tc.id} name={tc.name} args={tc.arguments}")
                    approvals.append(ToolApproval(tool_call_id=tc.id, approve=True))
                if approvals:
                    await project_client.agents.runs.submit_tool_outputs(
                        thread_id=thread_id,
                        run_id=run_id,
                        tool_approvals=approvals
                    )
                    print("[poll] Submitted tool approvals.")
        else:
            # Debug: Inspect run steps if stuck
            run_steps = [s async for s in project_client.agents.run_steps.list(thread_id=thread_id, run_id=run_id)]
            if run_steps:
                for step in run_steps:
                    sid = getattr(step, "id", None)
                    sstatus = getattr(step, "status", None)
                    print(f"  step: id={sid} status={sstatus}")
                    step_details = getattr(step, "step_details", None)
                    if step_details:
                        tool_calls = getattr(step_details, "tool_calls", None)
                        if tool_calls:
                            for call in tool_calls:
                                print(f"  tool_call id={getattr(call,'id',None)} name={getattr(call,'name',None)} args={getattr(call,'arguments',None)} output={getattr(call,'output',None)}")
        await asyncio.sleep(1)

This code works and auto-approves tool calls for MCP tools. But while using ConnectedAgentTool(), the run never enters requires_action — so no approvals are requested, and the orchestration halts.

Environment:
- azure-ai-agents==1.2.0b4
- azure-ai-projects==1.1.0b4
- Python: 3.11.13
- Auth: DefaultAzureCredential

Notes: MCP tools work and trigger approval normally when directly attached, and individual agents function as expected in standalone runs. Can anyone help here?
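For comparison, this is the minimal connected-agent wiring pattern as I understand it from the SDK samples; it may help rule out setup differences. The endpoint, agent ID, model deployment name, and instructions are placeholders, and exact method names can vary between azure-ai-agents builds, so treat it as a sketch rather than a confirmed fix.

import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ConnectedAgentTool

project_client = AIProjectClient(endpoint=os.environ["PROJECT_ENDPOINT"],
                                 credential=DefaultAzureCredential())

# The child agent that owns the MCP tools.
child_agent = project_client.agents.get_agent("CHILD_AGENT_ID")

connected = ConnectedAgentTool(
    id=child_agent.id,
    name="diagnostics_agent",  # the name the parent uses when delegating
    description="Handles diagnostic questions via the MCP-backed tools.",
)

# The orchestrating (main) agent; note that tools takes connected.definitions.
main_agent = project_client.agents.create_agent(
    model="gpt-4o",  # placeholder deployment name
    name="orchestrator",
    instructions="Delegate diagnostic questions to the connected diagnostics agent.",
    tools=connected.definitions,
)

Checking that the parent was created with connected.definitions (rather than the tool object itself) and that the child agent ID is current is a cheap sanity check before digging into approval behavior.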
AI Foundry - Open API spec tool issue

Hello, I'm trying to invoke my application's API as a tool within the AI Foundry OpenAPI specification tool. However, I keep encountering a 401 Unauthorized error. I'm using a Bearer token for authentication, and it works perfectly when tested via Postman. I'm unsure whether the issue lies in the input/output schema or the connection configuration. Unfortunately, the AI Foundry traces aren't providing enough detail to pinpoint the exact problem. Additionally, my API and AI Foundry accounts are hosted in different Azure subscriptions and networks. Could this network separation be affecting the connection? I would appreciate any guidance or help to resolve this issue. -Tamizh
I can't delete my Azure Key Vault Connection in Azure AI Foundry

I have deleted all projects under my Azure AI Foundry resource, but I still can't delete the Azure Key Vault connection. Error: "Azure Key Vault connection [Azure Key Vault Name] cannot be deleted, all credentials will be lost." Why is this happening?
Issue when connecting from SPFX to Entra-enabled Azure AI Foundry resource

We have been successfully connecting our chat bot from an SPFX solution to a chat completion model in Azure, using key authentication. We now have a requirement to disable key authentication. This is what we've done so far:
- Disabled API key authentication in the resource.
- Gave the SharePoint Client Extensibility Web Application Principal the "Cognitive Services OpenAI User", "Cognitive Service User" and "Cognitive Data Reader" permissions on the resource.
- In the SPFX solution we added the following in package-solution.json (and approved it in the SharePoint admin site):

"webApiPermissionRequests": [
  {
    "resource": "Azure Machine Learning Services",
    "scope": "user_impersonation"
  }
]

To connect to the chat completion API we're using fetchEventSource from '@microsoft/fetch-event-source', so we're getting a Bearer token using AadTokenProviderFactory from "@microsoft/sp-http", e.g.:

// preceded by some code to get the tokenProvider from aadTokenProviderFactory
const token = await tokenProvider.getToken('https://ai.azure.com');
const url = "https://my-ai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview";
await fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-type': 'application/json',
    Authorization: `Bearer ${token}`
  },
  body: body,
  ... // truncated

We added the users (let's say, email address removed for privacy reasons) in the resource as an Azure AI User. When we try to get this to work, we get the following error:

The principal `email address removed for privacy reasons` lacks the required data action `Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action` to perform `POST /openai/deployments/{deployment-id}/chat/completions` operation.

How can we make this work? Ideally we would prefer the SPFX principal to make the request to the chat completion API, without needing to add end users to the resource through IAM, but my understanding is that AadTokenProviderFactory only issues delegated access tokens.
Recent Blogs
- The pace of AI innovation is accelerating, and developers—across startups and global enterprises—are at the heart of this transformation. Today marks a significant moment for enterprise AI innovation... Nov 13, 2025
- Dive into this curated event guide to make the most of your Microsoft Ignite. Get set for an unforgettable in-person and virtual experience! Nov 12, 2025