Forum Discussion
Whisper-1 Model Transcribes English Audio Incorrectly
Well, in Azure.AI.OpenAI v2.2.0-beta.4, the ConversationInputTranscriptionOptions class does not yet expose a Language property, even though the underlying Whisper API does support a language parameter.
However, since the underlying API does support specifying the language, you can bypass the SDK and make a direct POST request with the language parameter in the payload.
Endpoint:
https://{your-resource-name}.openai.azure.com/openai/deployments/whisper-1/audio/transcriptions?api-version=2024-02-15-preview
POST Body (note: this endpoint expects multipart/form-data, since the audio file is binary, so read the JSON below as the list of form fields rather than a literal JSON payload):
{
"file": "<your-audio-file>",
"model": "whisper-1",
"language": "en"
}
This will lock the transcription to English and eliminate the language auto-detection issue.
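If you're calling from .NET anyway, here is a minimal sketch of that raw request using HttpClient. The resource name, deployment name, environment variable, and audio file name are placeholders for your own values, and it assumes the multipart/form-data contract described above:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class WhisperRestSample
{
    static async Task Main()
    {
        // Placeholders: substitute your own resource name, deployment, and key.
        var endpoint = "https://{your-resource-name}.openai.azure.com/openai/deployments/whisper-1/audio/transcriptions?api-version=2024-02-15-preview";
        var apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("api-key", apiKey);

        // The transcription endpoint takes multipart/form-data, not a JSON body.
        using var form = new MultipartFormDataContent();
        form.Add(new StreamContent(File.OpenRead("audio.wav")), "file", "audio.wav");
        form.Add(new StringContent("en"), "language"); // pin transcription to English

        var response = await http.PostAsync(endpoint, form);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

Note that Azure OpenAI authenticates this call with the api-key header rather than the Authorization: Bearer header used by openai.com.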
TL;DR:
- SDK v2.2.0-beta.4 doesn't yet expose the Language property.
- Use the REST API to explicitly set language: "en" for now.
PrathameshDeshmukh · Jul 04, 2025 · Copper Contributor
Thanks for your suggestion! I appreciate the workaround using the REST API to explicitly set the "language" parameter for Whisper transcription.
However, in my case, I’m using the Azure.AI.OpenAI SDK (v2.2.0-beta.4), which handles API calls internally. Unfortunately, the ConversationInputTranscriptionOptions class in this SDK doesn’t currently expose a Language property, so I can't directly pass the "language" parameter through the SDK.
Since the SDK abstracts away the HTTP layer, I can't inject custom parameters like "language" into the request body unless the SDK itself supports it. So it’s not applicable in my current setup.