Azure AI

Azure OpenAI Model Upgrades: Prompt Safety Pitfalls with GPT-4o and Beyond
Upgrading to New Azure OpenAI Models? Beware: Your Old Prompts Might Break.

I recently worked on upgrading our Azure OpenAI integration from gpt-35-turbo to gpt-4o-mini, expecting it to be a straightforward configuration change. Just update the Azure AI Foundry resource endpoint, change the model name, deploy the code — and voilà, everything should work as before. Right? Not quite.

The Unexpected Roadblock

As soon as I deployed the updated code, I started seeing 400 status errors from the OpenAI endpoint. The message was cryptic:

The response was filtered due to the prompt triggering Azure OpenAI's content management policy.

At first, I assumed it was a bug in my SDK call or a malformed payload. But after digging deeper, I realized this wasn't a technical failure — it was a content safety filter kicking in before the prompt even reached the model.

The Prompt That Broke It

Here's the original system prompt that worked perfectly with gpt-35-turbo:

```
YOU ARE A QNA EXTRACTOR IN TEXT FORMAT. YOU WILL GET A SET OF SURVEYJS QNA JSONS. YOU WILL CONVERT THAT INTO A TEXT DOCUMENT. FOR THE QUESTIONS WHERE NO ANSWER WAS GIVEN, MARK THOSE AS NO ANSWER. HERE IS THE QNA: BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE A DOCUMENT TO BE PUBLISHED. {{$style}} +++++ {{$input}} +++++
```

This prompt had been reliable for months. But with gpt-4o-mini, it triggered Azure's new input safety layer, introduced in mid-2024.

What Changed with GPT-4o-mini?

Unlike gpt-35-turbo, the gpt-4o family:
- Applies stricter content filtering — not just on the output, but also on the input prompt.
- Treats system messages and user messages as role-based chat messages, passing them through moderation before the model sees them.
- Flags prompts that resemble prompt injection attempts, such as aggressive imperative instructions ("YOU ARE…", "BE CREATIVE", "GENERATE", "PROFESSIONAL").
- Flags unusual formatting — artificial delimiters or token markers like `+++++` — because it can look like encoded content.

In short, the model didn't even get a chance to process my prompt — it was blocked at the gate.

Fixing It: Softening the Prompt

The solution wasn't to rewrite the entire logic, but to soften the system prompt and remove formatting that could be misinterpreted. Here's what helped:
- Replacing "YOU ARE…" with a gentler instruction like "Please help convert the following Q&A data…"
- Removing creative directives like "BE CREATIVE" or "PROFESSIONAL" unless clearly contextualized.
- Avoiding raw JSON markers and template syntax (`{{ }}`, `+++++`) in the prompt.

Once I made these changes, the model responded smoothly — and the upgrade was finally complete.

Evolving the Prompt — Not Abandoning It

Interestingly, for some prompts I didn't have to eliminate the "YOU ARE…" structure entirely. Instead, I refined it to be more natural and less directive. Here's a comparison:

❌ Old Prompt (Blocked):

```
YOU ARE A SOURCING AND PROCUREMENT MANAGER. YOU WILL GET BUYER'S REQUIREMENTS IN QNA FORMAT. HERE IS THE QNA: {{$input}} +++++ YOU WILL GENERATE TOP 10 {{$category}} RELATED QUESTIONS THAT CAN BE ASKED OF A SUPPLIER IN JSON FORMAT. THE JSON MUST HAVE QUESTION NUMBER AS THE KEY AND QUESTION TEXT AS THE QUESTION. DON'T ADD ANY DESCRIPTION TEXT OR FORMATTING IN THE OUTPUT. BE CREATIVE AND PROFESSIONAL. I WANT TO GENERATE AN RFX.
```

✅ New Prompt (Accepted):

```
You are an AI assistant that helps clarify sourcing requirements. You will receive buyer's requirements in QnA format. Here is the QnA: {$input} Your task is to generate the top 10 {$category} related questions that can be asked of a supplier, in JSON format.
- The JSON must use the question number as the key and the question text as the value.
- Do not include any description text or formatting in the output.
- Focus on creating clear, professional, and relevant questions that will help prepare an RFX.
```

Key Takeaways
- Model upgrades aren't just configuration changes — they can introduce new moderation layers that affect prompt design.
- Prompt safety filtering is now a first-class citizen in Azure OpenAI, especially for newer models.
- System prompts need to be written with moderation in mind, not just clarity or creativity.

This experience reminded me that even small upgrades can surface big learning moments. If you're planning to move to gpt-4o-mini or any newer Azure OpenAI model, take a moment to review your prompts — they might need a little more finesse than before.
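To make this failure mode easier to spot, it also helps to surface the filter explicitly in code instead of treating it as a generic 400. Here is a minimal sketch with the openai Python SDK; the endpoint, key, API version, and deployment name are placeholders, and the exact error payload shape can vary by API version:

```python
from openai import AzureOpenAI, BadRequestError

# Placeholder resource values; substitute your own endpoint, key, and deployment.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",
    api_key="YOUR_KEY",
    api_version="2024-06-01",
)

def ask(system_prompt: str, user_input: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # deployment name, not the base model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input},
            ],
        )
        return response.choices[0].message.content
    except BadRequestError as e:
        # Azure signals a blocked *input* prompt with HTTP 400 and the error
        # code "content_filter" before the model ever runs.
        if "content_filter" in str(e):
            raise RuntimeError(
                "Prompt rejected by the input content filter; soften directive "
                "phrasing and remove artificial delimiters."
            ) from e
        raise
```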
BYO Thread Storage in Azure AI Foundry using Python

Build scalable, secure, and persistent multi-agent memory with your own storage backend.

As AI agents evolve beyond one-off interactions, persistent context becomes a critical architectural requirement. Azure AI Foundry's latest update introduces a powerful capability — Bring Your Own (BYO) Thread Storage — enabling developers to integrate custom storage solutions for agent threads. This feature empowers enterprises to control how agent memory is stored, retrieved, and governed, aligning with compliance, scalability, and observability goals.

What Is "BYO Thread Storage"?

In Azure AI Foundry, a thread represents a conversation or task execution context for an AI agent. By default, thread state (messages, actions, results, metadata) is stored in Foundry's managed storage. With BYO Thread Storage, you can now:
- Store threads in your own database — Azure Cosmos DB, SQL, Blob, or even a vector DB.
- Apply custom retention, encryption, and access policies.
- Integrate with your existing data and governance frameworks.
- Enable cross-region disaster recovery (DR) setups seamlessly.

This gives enterprises full control of data lifecycle management — a big step toward AI-first operational excellence.

Architecture Overview

A typical setup involves:
- Azure AI Foundry Agent Service — hosts your multi-agent setup.
- Custom thread storage backend — e.g., Azure Cosmos DB, Azure Table, or PostgreSQL.
- Thread adapter — a Python class implementing the Foundry storage interface.
- Disaster recovery (DR) replication — optional replication of threads to a secondary region.

Implementing BYO Thread Storage using Python

Prerequisites

First, install the necessary Python packages:

```
pip install azure-ai-projects azure-cosmos azure-identity
```

Setting Up the Storage Layer

```python
from azure.cosmos import CosmosClient, PartitionKey
from azure.identity import DefaultAzureCredential
from datetime import datetime


class ThreadStorageManager:
    def __init__(self, cosmos_endpoint, database_name, container_name):
        credential = DefaultAzureCredential()
        self.client = CosmosClient(cosmos_endpoint, credential=credential)
        self.database = self.client.get_database_client(database_name)
        self.container = self.database.get_container_client(container_name)

    def create_thread(self, user_id, metadata=None):
        """Create a new conversation thread"""
        thread_id = f"thread_{user_id}_{datetime.utcnow().timestamp()}"
        thread_data = {
            'id': thread_id,
            'user_id': user_id,
            'messages': [],
            'created_at': datetime.utcnow().isoformat(),
            'updated_at': datetime.utcnow().isoformat(),
            'metadata': metadata or {}
        }
        self.container.create_item(body=thread_data)
        return thread_id

    def add_message(self, thread_id, role, content):
        """Add a message to an existing thread"""
        thread = self.container.read_item(item=thread_id, partition_key=thread_id)
        message = {
            'role': role,
            'content': content,
            'timestamp': datetime.utcnow().isoformat()
        }
        thread['messages'].append(message)
        thread['updated_at'] = datetime.utcnow().isoformat()
        self.container.replace_item(item=thread_id, body=thread)
        return message

    def get_thread(self, thread_id):
        """Retrieve a complete thread"""
        try:
            return self.container.read_item(item=thread_id, partition_key=thread_id)
        except Exception as e:
            print(f"Thread not found: {e}")
            return None

    def get_thread_messages(self, thread_id):
        """Get all messages from a thread"""
        thread = self.get_thread(thread_id)
        return thread['messages'] if thread else []

    def delete_thread(self, thread_id):
        """Delete a thread"""
        self.container.delete_item(item=thread_id, partition_key=thread_id)
```
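A detail worth calling out before wiring this up: the class reads and deletes items with `partition_key=thread_id`, which only works if the container is partitioned on `/id`. Here is a minimal provisioning sketch; the account endpoint, key, and names are placeholders. Note that creating databases and containers is a management operation, so this sketch uses the account key; with Entra ID auth you would typically provision the container up front (portal, ARM/Bicep) instead:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; management operations generally need key auth.
client = CosmosClient(
    "https://your-cosmos-account.documents.azure.com:443/",
    credential="<account-key>",
)
database = client.create_database_if_not_exists(id="conversational-ai")

# Partitioning on /id makes partition_key=thread_id lookups in the
# ThreadStorageManager above resolve to a single logical partition.
container = database.create_container_if_not_exists(
    id="threads",
    partition_key=PartitionKey(path="/id"),
)
```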
Integrating with Azure AI Foundry

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential


class ConversationManager:
    def __init__(self, project_endpoint, storage_manager):
        self.ai_client = AIProjectClient.from_connection_string(
            credential=DefaultAzureCredential(),
            conn_str=project_endpoint
        )
        self.storage = storage_manager

    def start_conversation(self, user_id, system_prompt):
        """Initialize a new conversation"""
        thread_id = self.storage.create_thread(
            user_id=user_id,
            metadata={'system_prompt': system_prompt}
        )
        # Add system message
        self.storage.add_message(thread_id, 'system', system_prompt)
        return thread_id

    def send_message(self, thread_id, user_message, model_deployment):
        """Send a message and get AI response"""
        # Store user message
        self.storage.add_message(thread_id, 'user', user_message)

        # Retrieve conversation history
        messages = self.storage.get_thread_messages(thread_id)

        # Call Azure AI with the conversation history
        chat_client = self.ai_client.inference.get_chat_completions_client()
        response = chat_client.complete(
            model=model_deployment,
            messages=[
                {"role": msg['role'], "content": msg['content']}
                for msg in messages
            ]
        )
        assistant_message = response.choices[0].message.content

        # Store assistant response
        self.storage.add_message(thread_id, 'assistant', assistant_message)
        return assistant_message
```

Usage Example

```python
# Initialize storage and conversation manager
storage = ThreadStorageManager(
    cosmos_endpoint="https://your-cosmos-account.documents.azure.com:443/",
    database_name="conversational-ai",
    container_name="threads"
)

conversation_mgr = ConversationManager(
    project_endpoint="your-project-connection-string",
    storage_manager=storage
)

# Start a new conversation
thread_id = conversation_mgr.start_conversation(
    user_id="user123",
    system_prompt="You are a helpful AI assistant."
)

# Send messages
response1 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="What is machine learning?",
    model_deployment="gpt-4"
)
print(f"AI: {response1}")

response2 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="Can you give me an example?",
    model_deployment="gpt-4"
)
print(f"AI: {response2}")

# Retrieve full conversation history
history = storage.get_thread_messages(thread_id)
for msg in history:
    print(f"{msg['role']}: {msg['content']}")
```

Key Highlights:
- Threads are stored in Cosmos DB under your control.
- You can attach metadata such as region, owner, or compliance tags.
- Integrates natively with existing Azure identity and Key Vault.

Disaster Recovery & Resilience

When coupled with geo-replicated Cosmos DB or Azure Storage RA-GRS, your BYO thread storage becomes resilient by design:
- Primary writes in East US replicate to Central US.
- Foundry auto-detects failover and reconnects to the secondary region.
- Threads remain available during outages — ensuring operational continuity.

This aligns perfectly with the AI-First Operational Excellence architecture theme, where reliability and observability drive intelligent automation.
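To make the client side region-aware as well, the Cosmos SDK accepts a preferred read order. A small sketch under the same East US / Central US assumption; the endpoint is a placeholder:

```python
from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

# With multi-region replication enabled on the account, the SDK reads from
# the first available region in this list and fails over down the list.
client = CosmosClient(
    "https://your-cosmos-account.documents.azure.com:443/",  # placeholder
    credential=DefaultAzureCredential(),
    preferred_locations=["East US", "Central US"],
)
```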
Best Practices

| Area | Recommendation |
| --- | --- |
| Security | Use Azure Key Vault for credentials & encryption keys. |
| Compliance | Configure data residency & retention in your own DB. |
| Observability | Log thread CRUD operations to Azure Monitor or Application Insights. |
| Performance | Use async I/O and partition keys for large workloads. |
| DR | Enable geo-redundant storage & run failover tests regularly. |

When to Use BYO Thread Storage

| Scenario | Why it helps |
| --- | --- |
| Regulated industries (BFSI, healthcare, etc.) | Maintain data control & audit trails |
| Multi-region agent deployments | Support DR and data sovereignty |
| Advanced analytics on conversation data | Query threads directly from your DB |
| Enterprise observability | Unified monitoring across Foundry + Ops |

The Future

BYO Thread Storage opens doors to advanced use cases — federated agent memory, semantic retrieval over past conversations, and dynamic workload failover across regions. For architects, this feature is a key enabler for secure, scalable, and compliant AI system design. For developers, it means more flexibility, transparency, and integration power.

Summary

| Feature | Benefit |
| --- | --- |
| Custom thread storage | Full control over data |
| Python adapter support | Easy extensibility |
| Multi-region DR ready | Business continuity |
| Azure-native security | Enterprise-grade safety |
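One implementation note on the Performance row in the Best Practices table: the Cosmos SDK ships an async variant in the `azure.cosmos.aio` namespace, which suits high-concurrency agent workloads. A hedged sketch of what an async read path could look like; the account, database, and container names are placeholders:

```python
import asyncio
from azure.cosmos.aio import CosmosClient
from azure.identity.aio import DefaultAzureCredential

async def get_thread(thread_id: str):
    # The async client mirrors the sync API; context managers handle cleanup.
    async with DefaultAzureCredential() as credential:
        async with CosmosClient(
            "https://your-cosmos-account.documents.azure.com:443/",  # placeholder
            credential=credential,
        ) as client:
            container = client.get_database_client("conversational-ai") \
                              .get_container_client("threads")
            return await container.read_item(item=thread_id, partition_key=thread_id)

# Example (hypothetical thread id):
# thread = asyncio.run(get_thread("thread_user123_1700000000.0"))
```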
Conclusion

Implementing BYO thread storage in Azure AI Foundry gives you the flexibility to build AI applications that meet your specific requirements for data governance, performance, and scalability. By taking control of your storage, you can create more robust, compliant, and maintainable AI solutions.

Connect AI Agent via Postman

I'm having the hardest time trying to connect to my custom agent (agent_id: asst_g8DVMGAOLiXXk7WmiTCMQBgj) via Postman. I'm able to authenticate fine and receive the secure token, and with it I can run my deployment POST with no issues (https://aiagentoverview.cognitiveservices.azure.com/openai/deployments/gpt-4.1/chat/completions?api-version=2025-01-01-preview). But how do I run a POST against my agent_id: asst_g8DVMGAOLiXXk7WmiTCMQBgj? I can't find any instructions anywhere.
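One pattern worth trying: agents with an `asst_…` id are driven through the Assistants-style thread/run flow rather than the chat-completions route, so a single POST to the agent id won't work. A rough sketch of the sequence, shown in Python; the paths and api-version are assumptions to verify against the current Azure OpenAI Assistants REST reference:

```python
import time
import requests

token = "<bearer token from your existing auth step>"  # placeholder
base = "https://aiagentoverview.cognitiveservices.azure.com/openai"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
params = {"api-version": "2024-05-01-preview"}  # assumed Assistants-capable version

# 1) Create a thread, 2) add a user message, 3) run the agent, 4) poll, 5) read replies.
thread = requests.post(f"{base}/threads", headers=headers, params=params, json={}).json()
requests.post(f"{base}/threads/{thread['id']}/messages", headers=headers, params=params,
              json={"role": "user", "content": "Hello"})
run = requests.post(f"{base}/threads/{thread['id']}/runs", headers=headers, params=params,
                    json={"assistant_id": "asst_g8DVMGAOLiXXk7WmiTCMQBgj"}).json()
while run["status"] in ("queued", "in_progress"):
    time.sleep(1)
    run = requests.get(f"{base}/threads/{thread['id']}/runs/{run['id']}",
                       headers=headers, params=params).json()
messages = requests.get(f"{base}/threads/{thread['id']}/messages",
                        headers=headers, params=params).json()
```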
Issue when connecting from SPFX to Entra-enabled Azure AI Foundry resource

We have been successfully connecting our chat bot from an SPFX solution to a chat completion model in Azure, using key authentication. We now have a requirement to disable key authentication. This is what we've done so far:
- Disabled API key authentication on the resource.
- Granted the SharePoint Client Extensibility Web Application Principal the "Cognitive Services OpenAI User", "Cognitive Services User", and "Cognitive Data Reader" roles on the resource.
- Added the following to package-solution.json in the SPFX solution (and approved it in the SharePoint admin site):

```json
"webApiPermissionRequests": [
  {
    "resource": "Azure Machine Learning Services",
    "scope": "user_impersonation"
  }
]
```

To connect to the chat completion API we're using fetchEventSource from '@microsoft/fetch-event-source', so we're getting a Bearer token using AadTokenProviderFactory from "@microsoft/sp-http", e.g.:

```js
// preceded by some code to get the tokenProvider from aadTokenProviderFactory
const token = await tokenProvider.getToken('https://ai.azure.com');
const url = "https://my-ai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview";
await fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-type': 'application/json',
    Authorization: `Bearer ${token}`
  },
  body: body,
  ... // truncated
```

We added the users (let's say, email address removed for privacy reasons) to the resource as an Azure AI User. When we try to get this to work, we get the following error:

The principal `email address removed for privacy reasons` lacks the required data action `Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action` to perform `POST /openai/deployments/{deployment-id}/chat/completions` operation.

How can we make this work? Ideally we would prefer the SPFX principal to make the request to the chat completion API, without needing to add end users to the resource through IAM, but my understanding is that AadTokenProviderFactory only issues delegated access tokens.
Azure OpenAI: GPT-5-Codex Availability?

Greetings everyone! I just wanted to see if there's any word as to when/if GPT-5-Codex (https://openai.com/index/introducing-upgrades-to-codex/) will make its way to AI Foundry. It was released on September 15th, 2025, but I have no idea how far Azure tends to lag behind OpenAI's releases. There doesn't really seem to be any source of information describing what Azure plans to do with new models when they drop, if anything. Any conversation around this would be helpful and appreciated, thanks!
Agent in Azure AI Foundry not able to access SharePoint data via C# (but works in Foundry portal)

Hi Team, I created an agent in Azure AI Foundry and added a knowledge source using the SharePoint tool. When I test the agent inside the Foundry portal, it works correctly; it can read from the SharePoint site and return file names/data. However, when I call the same agent using C# code, it answers normal questions fine, but whenever I ask about the SharePoint data, I get the error:

Sorry, something went wrong. Run status: failed

I also referred to the official documentation and sample here: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/sharepoint-samples?pivots=rest

I tried the cURL samples as well, and while the agent is created successfully, the run status always comes back as failed. Has anyone faced this issue? Do I need to configure something extra for SharePoint when calling the agent programmatically (like additional permissions or connection binding)? Any help on this would be greatly appreciated. Thanks!
Push for Rapid AI Growth

There is a key factor in why AI is not growing at the speed of light: most AI is built either by a specific company (e.g., OpenAI for ChatGPT, Microsoft for Copilot, Google for Gemini) or by individuals and small groups building agents for fun or for their workplaces. But what would happen if we merged them together? Imagine a website that is owned by no one, is open source, and allows everyone to train the same AI simultaneously. Imagine if, instead of Microsoft building Copilot, the whole world were building and training Copilot at the same time, drawing on all global computing power. This would lead to a shocking and exponential growth of AI never seen before. This is why I think Copilot should allow everyone to train its AI.
From Space to Subsurface: Using Azure AI to Predict Gold Rich Zones

In traditional mineral exploration, identifying gold bearing zones can take months of fieldwork and high cost drilling, often with limited success. In our latest project, we flipped the process on its head by using Azure AI and satellite data to guide geologists before they break ground. Using Azure AI and Azure Machine Learning, we built an intelligent, automated pipeline that identified high potential zones from geospatial data, saving time, cost, and uncertainty. Here's a behind the scenes look at how we did it. 👇

📡 Step 1: Translating Satellite Imagery into Features

We began with Sentinel-2 imagery covering our Area of Interest (AOI) and derived alteration indices commonly used in mineral exploration, including:
- 🟤 Clay Index – a proxy for hydrothermal alteration
- 🟥 Fe (Iron Oxide) Index
- 🌫️ Silica Ratio
- 💧 NDMI (Normalized Difference Moisture Index)

Using Azure Notebooks and Python, we processed and cleaned the imagery, transforming raw reflectance bands into meaningful geochemical features.

🔍 Step 2: Discovering Patterns with Unsupervised Learning (KMeans)

With feature rich geospatial data prepared, we used unsupervised clustering (KMeans) in Azure Machine Learning Studio to identify natural groupings across the region. This gave us a first look at the terrain's underlying geochemical structure; one cluster in particular stood out as a strong candidate for gold rich zones. No geology degree needed: AI finds patterns humans can't see :)
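For illustration, the clustering step itself is compact. Here is a sketch with scikit-learn rather than our production notebook; the feature columns mirror the indices above, the values are synthetic, and k=5 is an assumption that would normally be chosen via elbow or silhouette analysis:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-pixel feature table derived from Sentinel-2 bands.
features = pd.DataFrame({
    "clay_index": np.random.rand(7200),
    "fe_index": np.random.rand(7200),
    "silica_ratio": np.random.rand(7200),
    "ndmi": np.random.rand(7200),
})

# Standardize so no single index dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Assumed k=5; tune with elbow/silhouette analysis on real data.
kmeans = KMeans(n_clusters=5, random_state=42, n_init=10)
features["cluster"] = kmeans.fit_predict(X)

print(features["cluster"].value_counts())
```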
🧠 Step 3: Scaling with Azure AutoML

We then trained a classification model using Azure AutoML to predict these clusters over a dense prediction grid:
✅ 7,200+ data points generated
✅ ~50m resolution grid
✅ 14 km² area of interest

This was executed as a short, early stopping run to minimize cost and optimize training time. Models were trained, validated, and registered using:
- Azure Machine Learning Compute Instance + Compute Cluster
- Azure Storage for dataset access

🔬 Step 4: Validation with Field Samples

To ground our predictions, we validated against lab assayed gold concentrations from field sampling points. The results? 🔥 The geospatial cluster labeled "Class 0" by the model showed strong correlation with lab validated gold concentrations, supporting the model's predictive validity. This gave geologists AI augmented evidence to prioritize areas for further sampling and drilling.

⚖️ Traditional vs AI-based Workflow

🚀 Why Azure?
✅ Azure Machine Learning Studio for AutoML and experiment tracking
✅ Azure Storage for seamless access to geospatial data
✅ Azure OpenAI Service for advanced language understanding, report generation, and enhanced human AI interaction
✅ Azure Notebooks for scripting, preprocessing, and validation
✅ Azure Compute Cluster for scalable, cost effective model training
✅ Model Registry for versioning and deployment

🌍 Key Takeaways

AI turns mineral exploration from reactive guesswork into proactive intelligence. In our workflow, AI plays a critical role by:
✅ Extracting key geochemical features from satellite imagery
🧠 Identifying patterns using unsupervised learning
🎯 Predicting high potential zones through automated classification
🌍 Delivering full spatial coverage at scale

With Azure AI and Azure ML tools, we've built a complete pipeline that supports:
- End to end automation, from data prep to model deployment
- Faster, more accurate exploration with lower costs
- A reusable, scalable solution for global teams

This isn't just a proof of concept; it's a production ready framework that empowers geologists with AI driven insights before the first drill hits the ground.

🔗 If you're working in the mining industry, geoscience, AI for Earth, or exploration tech, let's connect! We're on a mission to bring AI deeper into every industry through strategic partnerships and collaborative innovation.
Introducing Azure AI Models: The Practical, Hands-On Course for Real Azure AI Skills

Hello everyone, today I'm excited to share something close to my heart. After watching so many developers, including myself, get lost in a maze of scattered docs and endless tutorials, I knew there had to be a better way to learn Azure AI. So I decided to build a guide from scratch, with a goal to break things down step by step, making it easy for beginners to get started with Azure. My aim was to remove the guesswork and create a resource where anyone could jump in, follow along, and actually see results without feeling overwhelmed.

Introducing the Azure AI Models Guide. This is a brand new, solo-built, open-source repo aimed at making Azure AI accessible for everyone—whether you're just getting started or want to build real, production-ready apps using Microsoft's latest AI tools. The idea is simple: bring all the essentials into one place. You'll find clear lessons, hands-on projects, and sample code in Python, JavaScript, C#, and REST—all structured so you can learn step by step, at your own pace. I wanted this to be the resource I wish I'd had when I started: straightforward, practical, and friendly to beginners and pros alike. It's early days for the project, but I'm excited to see it grow. If you're curious, check out the repo at https://github.com/DrHazemAli/Azure-AI-Models. Your feedback—and maybe even your contributions—will help shape where it goes next!
Introducing AzureImageSDK — A Unified .NET SDK for Azure Image Generation and Captioning

Hello 👋 I'm excited to share something I've been working on — AzureImageSDK — a modern, open-source .NET SDK that brings together Azure AI Foundry's image models (like Stable Image Ultra and Stable Image Core) with Azure Vision, content moderation APIs, and image utilities, all in one clean, extensible library.

While working with Azure's image services, I kept hitting the same wall: each model had its own input structure, parameters, and output format — and there was no unified, async-friendly SDK to handle image generation, visual analysis, and moderation under one roof. So... I built one.

AzureImageSDK wraps Azure's powerful image capabilities into a single, async-first C# interface that makes it dead simple to:
🎨 Run inference on image models
🧠 Analyze visual content (image to text)
🚦 Use image utilities
— all with just a few lines of code.

It's fully open-source, designed for extensibility, and ready to support new models the moment they launch.

🔗 GitHub Repo: https://github.com/DrHazemAli/AzureImageSDK

I've also posted the release announcement in the Azure AI Foundry discussions (https://github.com/orgs/azure-ai-foundry/discussions/47) 👉🏻 feel free to join the conversation there too. The SDK is available on NuGet as well. Would love to hear your thoughts, use cases, or feedback!