Azure OpenAI Service
Azure OpenAI - If I add a data source, it no longer follows my instructions on how to respond?
Hi everyone, I've been experimenting with Azure OpenAI for a few days and find it pretty cool. One issue I have: I give it example prompts and fill in the "Give the model instructions and context" section, which works great, but as soon as I add a data source like Azure Blob Storage, it ONLY uses that data and ignores all of my instructions. I have "Limit responses to your data content" disabled under the data source. Is this normal behaviour?

The second issue is that even with that option unchecked, it still only responds based on the documents. For example, I'll ask about a product and it answers from my documents. I'll then ask whether a competitor company also offers this product, and it says it doesn't know because that wasn't found in the documents. If I tell it to search online, it says it's not capable of searching online and can only provide information based on the documents I've provided. I'm a bit confused by that, since asking ChatGPT directly would have answered that question.

My goal is to have it act as a general AI like ChatGPT, as in, it will answer general questions but ALSO reference our documents. Is that not possible, or is it one or the other?
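For reference, a minimal sketch of what the playground's "on your data" setup looks like as a direct API call, assuming the Blob container has already been ingested into an Azure AI Search index (which is what the "Add your data" wizard does). The "Limit responses to your data content" checkbox surfaces as the in_scope flag. Endpoint variables, the index name, the deployment name, and the prompts below are illustrative placeholders, not a definitive configuration:

```python
# Sketch: Azure OpenAI chat completion with an "on your data" Azure AI Search
# source attached via extra_body. Names and api_version are assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # deployment name (placeholder)
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Answer general questions and, when relevant, cite the attached company documents."},
        {"role": "user", "content": "Does a competitor also offer this product?"},
    ],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                    "index_name": "my-blob-index",  # placeholder index name
                    "authentication": {
                        "type": "api_key",
                        "key": os.environ["AZURE_AI_SEARCH_KEY"],
                    },
                    # False corresponds to unchecking "Limit responses to your
                    # data content" in the playground; the model may still
                    # favor retrieved documents over general knowledge, and it
                    # never browses the web.
                    "in_scope": False,
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```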
Weird problem when comparing answers from the Chat Playground and answers from the API

I'm running into a weird issue with Azure AI Foundry (gpt-4o-mini) and need help. I'm building a chatbot that classifies each user message into:

- a follow-up to the previous message
- a repeat of an earlier message
- a brand-new query

The classification logic works perfectly in the Azure AI Foundry Chat Playground. But when I use the exact same prompt in Python via:

- AzureChatOpenAI() (LangChain), or
- the official Azure OpenAI code from "View Code" (client.chat.completions.create())

…I get totally different and often wrong results.

I've already verified:

- the same deployment name (gpt-4o-mini)
- the same temperature / top_p / max_tokens
- the same system and user messages
- even tried copy-pasting the full system prompt from the Playground

But the API version still behaves very differently. It feels like the Chat Playground is using some kind of hidden system prompt, invisible scaffolding, or extra formatting that is NOT shown in the UI and NOT included in the "View Code" snippet. The Playground output is consistently more accurate than the raw API call.

Question: Does the Chat Playground apply hidden instructions or pre-processing that we can't see? And is there any way to either view those hidden prompts or replicate Playground behavior exactly through the API or LangChain?

If anyone has run into this or knows how to get identical behavior outside the Playground, I'd really appreciate the help.
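For reference, a minimal sketch of calling a gpt-4o-mini deployment directly with every playground-visible setting passed explicitly. The deployment name, system prompt, conversation, and parameter values below are illustrative placeholders, not the poster's actual setup:

```python
# Sketch: direct Azure OpenAI call mirroring playground settings explicitly.
# Endpoint, deployment name, prompts, and api_version are assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

SYSTEM_PROMPT = """Classify the latest user message as exactly one of:
- follow_up: continues the previous message
- repeat: restates an earlier message
- new_query: unrelated to prior messages
Respond with only the label."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # deployment name, not the base model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Send the same conversation history the playground keeps in its
        # chat window; a lone user turn gives the model nothing to judge
        # "follow-up" or "repeat" against.
        {"role": "user", "content": "What are the shipping options?"},
        {"role": "assistant", "content": "We offer standard and express shipping."},
        {"role": "user", "content": "How long does the express one take?"},
    ],
    temperature=0,
    top_p=1,
    max_tokens=10,
)

print(response.choices[0].message.content)
```

An equivalent AzureChatOpenAI (LangChain) call should behave the same as long as the deployment, api_version, conversation history, and sampling parameters match what the Playground sends.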