Recent Discussions
Introducing Azure AI Models: The Practical, Hands-On Course for Real Azure AI Skills
Hello everyone, Today I’m excited to share something close to my heart. After watching so many developers, including myself, get lost in a maze of scattered docs and endless tutorials, I knew there had to be a better way to learn Azure AI. So I decided to build a guide from scratch, with the goal of breaking things down step by step and making it easy for beginners to get started with Azure. My aim was to remove the guesswork and create a resource where anyone could jump in, follow along, and actually see results without feeling overwhelmed. Introducing the Azure AI Models Guide. This is a brand new, solo-built, open-source repo aimed at making Azure AI accessible for everyone, whether you’re just getting started or want to build real, production-ready apps using Microsoft’s latest AI tools. The idea is simple: bring all the essentials into one place. You’ll find clear lessons, hands-on projects, and sample code in Python, JavaScript, C#, and REST, all structured so you can learn step by step, at your own pace. I wanted this to be the resource I wish I’d had when I started: straightforward, practical, and friendly to beginners and pros alike. It’s early days for the project, but I’m excited to see it grow. If you’re curious, check out the repo at https://github.com/DrHazemAli/Azure-AI-Models Your feedback, and maybe even your contributions, will help shape where it goes next!
Whisper-1 Model Transcribes English Audio Incorrectly

Hi everyone, I'm currently working with the gpt-4o-realtime-preview model from Azure OpenAI and using the whisper-1 model for audio-to-text transcription. However, I'm encountering a recurring issue where the transcription frequently fails to detect the correct language. Even though I provide clear English audio, the output is often transcribed in other languages such as Hindi, Urdu, or Chinese. This inconsistency is affecting the reliability of the transcription process. Here’s a snippet of the code I’m using:

```csharp
ConversationSessionOptions sessionOptions = new()
{
    Voice = ConversationVoice.Alloy,
    InputAudioFormat = ConversationAudioFormat.Pcm16,
    OutputAudioFormat = ConversationAudioFormat.Pcm16,
    Instructions = instructions,
    InputTranscriptionOptions = new()
    {
        Model = "whisper-1",
    },
};
```

Is there a way to explicitly specify or prompt the whisper-1 model to prioritize or lock in English as the transcription language? Any guidance on how to improve language detection accuracy would be greatly appreciated. Thanks in advance!
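A hedged sketch of one way to steer transcription toward English: the batch Whisper API accepts an ISO-639-1 language parameter and a text prompt that biases language detection, and newer realtime api-versions expose similar fields on the input_audio_transcription session block. Whether your SDK version surfaces a matching Language property on InputTranscriptionOptions is an assumption to verify, so the field names below are illustrative:

```python
# Hedged sketch: build the input_audio_transcription block for a realtime
# session with a language hint. The "language" and "prompt" fields are
# assumptions to check against your api-version; the batch Whisper endpoint
# definitely accepts language="en".

def build_transcription_options(model: str = "whisper-1",
                                language: str = "en",
                                prompt: str = "") -> dict:
    """Transcription options with an explicit ISO-639-1 language hint."""
    options = {"model": model, "language": language}
    if prompt:
        # An English-language prompt also biases Whisper toward English.
        options["prompt"] = prompt
    return options

session_update = {
    "type": "session.update",
    "session": {
        "input_audio_transcription": build_transcription_options(
            prompt="The following audio is in English."),
    },
}
```

If the realtime options don't expose these fields in your package version, transcribing the captured audio separately via the batch endpoint with an explicit language is a fallback worth testing.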
Supercharging Solution Architecture with GitHub Copilot Prompts Every Architect Should Know

As a Solution Architect, you’re often juggling high-level system design, reviewing code, drafting technical documentation, and ensuring that your solutions meet both business and technical requirements. GitHub Copilot, powered by advanced AI, isn’t just for developers: it can be a powerful assistant for Solution Architects too. In this blog, we’ll explore how you can craft GitHub Copilot prompts to accelerate your architectural workflow, design decisions, and documentation. https://dellenny.com/supercharging-solution-architecture-with-github-copilot-prompts-every-architect-should-know/
Azure AI Foundry/Azure AI Service - cannot access agents

I'm struggling to get agents defined in AI Foundry (based on Azure AI Service) to work via the API. When I define an agent in a project in AI Foundry, I can use it in the playground via the web browser. The issue appears when I try to access it via the API (a call from Power Automate): when executing a Run on the agent, I get a message that the agent cannot be found. The issue doesn't exist when using Azure OpenAI and defining assistants; those I can use both via the API and the web browser. I suspect the additional layer of management, the project, might be the issue here. I saw SDK usage in Python where the first call connects to a project and then gets the agent. Has anyone experienced the same? Is there a way to select and run an agent via the API?
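In case it helps while debugging: Foundry agents are scoped to a project, so REST calls have to target the project endpoint rather than the bare Azure OpenAI endpoint, which would explain an "agent cannot be found" error when a flow calls the wrong base URL. The endpoint shape and api-version below are assumptions of mine; verify them against the Agent Service REST reference before relying on them:

```python
# Hedged sketch: build the run-creation URL against a *project* endpoint.
# Both the endpoint format and the api-version are assumptions - check the
# Azure AI Foundry Agent Service REST reference for your region.

def agent_run_url(project_endpoint: str, thread_id: str,
                  api_version: str = "2024-12-01-preview") -> str:
    """URL for creating a run on a thread inside an AI Foundry project."""
    base = project_endpoint.rstrip("/")
    return f"{base}/threads/{thread_id}/runs?api-version={api_version}"

url = agent_run_url(
    "https://myhub.services.ai.azure.com/api/projects/myproj",  # hypothetical
    "thread_abc123")
```

From Power Automate the same rule applies: the HTTP action's base URL must be the project endpoint shown on the project's overview page, not the resource's OpenAI endpoint.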
Exploring Azure AI Foundry's Model Router: How It Automatically Optimizes Costs and Performance

A few days ago, I stumbled upon Azure AI Foundry's Model Router (preview) and was fascinated by its promise: a single deployment that automatically selects the most appropriate model for each query. As a developer, this seemed revolutionary: no more manually choosing between GPT models (at the moment it only works with the OpenAI family) or the new o-series reasoning models. I decided to conduct a comprehensive analysis to truly understand how this intelligent router works and share my findings with the community.

What is Model Router? Model Router is essentially a "meta-model" that acts like an orchestra conductor. When you send it a query, it evaluates in real time factors such as query complexity, whether deep reasoning is required, the necessary context length, and the request parameters. It then routes your request to the most suitable model, optimizing both cost and performance.

Test: I developed a Python script that performs over 50 different tests, grouped into 5 main categories. (I'm from Spain, so I tested in Spanish; sorry for that.) Here's what I discovered: the router proved to be surprisingly intelligent. For simple questions like "What is the capital of France?", it consistently selected more economical models. But when I posed complex math or programming problems, it automatically scaled up to GPT-4 or even o-series reasoning models.

Advantages I found:
- Automatic cost optimization: significant savings by using economical models when possible
- No added complexity: a single endpoint for all your needs
- Better performance: o-series models activate automatically for complex problems
- Transparency: you can always see which model was used in response.model

Billing information: when you use model router today, you're only billed for the use of the underlying models as they're recruited to respond to prompts; the model routing function itself doesn't incur any extra charges. Starting August 1, model router usage will be charged as well.
You can monitor the costs of your model router deployment in the Azure portal.
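Since the routed model is reported back in response.model, the routing distribution over a batch of test queries can be tallied with a small helper like this (the model names below are illustrative, not a statement of what the router actually picked):

```python
from collections import Counter

# Tally which underlying models the router selected across a test run.
# Feed it the response.model values collected from each request.

def routing_summary(model_names: list[str]) -> dict[str, float]:
    """Return each routed model's share of requests as a fraction."""
    counts = Counter(model_names)
    total = sum(counts.values())
    return {model: count / total for model, count in counts.items()}

summary = routing_summary(
    ["gpt-4o-mini", "gpt-4o-mini", "gpt-4o", "o1-mini", "gpt-4o-mini"])
# summary maps each model name to its fraction of the 5 requests
```

Multiplying each fraction by the per-model token price gives a quick estimate of what the router is saving versus always calling the largest model.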
Azure OpenAI - GPT-4.1 + tools/image_generation doesn't work

Hi,

```python
response = client.responses.create(
    model='gpt-4.1',
    input=prompt,
    tools=[{'type': 'image_generation'}],
)
```

When used with Azure OpenAI it fails with:

```
openai.BadRequestError: Error code: 400 - {'error': {'message': 'There was an issue with your request. Please check your inputs and try again', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```

It works fine with OpenAI directly.
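As a hedged workaround while the Responses tool path returns 400 on Azure, the Images API can be called directly against an image-model deployment. The deployment name below is an assumption; substitute whatever image model you actually have deployed in your resource:

```python
# Hedged workaround sketch: skip the image_generation tool and call the
# Images API directly. "gpt-image-1" is a hypothetical deployment name -
# use the name of your own image-model deployment.

def image_request_payload(prompt: str, deployment: str = "gpt-image-1",
                          size: str = "1024x1024") -> dict:
    """Request body for client.images.generate(**payload)."""
    return {"model": deployment, "prompt": prompt, "size": size}

payload = image_request_payload("a lighthouse at dusk")
# result = client.images.generate(**payload)  # needs a configured AzureOpenAI client
```

It may also be worth checking whether your api-version supports the image_generation tool at all on Azure; the Responses tool surface has lagged the OpenAI platform in some preview versions.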
Integrate a futuristic, AI-driven feature into any existing Microsoft product

If you were given the chance to integrate a futuristic, AI-driven feature into any existing Microsoft product (like Excel, Teams, Azure, VS Code, or even PowerPoint), something that no one has ever attempted before, what would it be, and why? Think beyond automation and chatbots. I’m talking about concepts that could feel like science fiction today but might just be possible tomorrow. Would love to hear what wild ideas the community has!
Prompt management?

Is anyone writing agent or LLM API call apps in Python or C# that *avoid* inline prompts? What's your approach? Loading from files, blob storage, or using other solutions? Any experience with these, or comparable Azure AI approaches similar to:
- LangChain / LangSmith load_prompt and prompt client push and pull to their Hub
- Amazon Bedrock Converse API
- PromptLayer
- Other?
It doesn't seem like there are good project or folder conventions for AI agents, etc. Code samples are inline prompt spaghetti. It's like web apps before MVC frameworks. Who should write and own prompts in an enterprise? Versioning, maybe signing? I see that Azure AI has prompt and evaluation tools, but I'm not seeing a way to get at these with an API or SDK. Also, GitHub Models just released something, but it has limits right now. And MCP is taking off with its approach to Prompts and Roots.
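For what it's worth, a minimal file-based approach goes a long way before reaching for a hosted prompt hub: store each prompt as a versioned text file so it can be code-reviewed, diffed, and owned like any other source artifact. The folder convention here is an assumption of mine, not an established standard:

```python
from pathlib import Path
from string import Template

# Minimal sketch of file-based prompt management: prompts live as versioned
# text files (prompts/<name>/<version>.txt) instead of inline strings.
# Sorting the version filenames gives a simple "latest" resolution.

class PromptStore:
    def __init__(self, root: str = "prompts"):
        self.root = Path(root)

    def load(self, name: str, version: str = "latest") -> Template:
        folder = self.root / name
        if version == "latest":
            version = sorted(p.stem for p in folder.glob("*.txt"))[-1]
        return Template((folder / f"{version}.txt").read_text(encoding="utf-8"))

# Usage: PromptStore().load("summarize", "v2").substitute(document=text)
```

Pinning an explicit version in production code while letting experiments track "latest" is one way to answer the ownership question: whoever reviews the pull request that bumps the pinned version owns the change.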
Push for Rapid AI Growth

There is a key factor in why AI is not growing as quickly as it could: most AI is built either by a specific company (e.g. OpenAI for ChatGPT, Microsoft for Copilot, Google for Gemini) or by individuals and small groups building agents for fun or for their workplaces. But what would happen if we merged them together? Imagine a website that is owned by no one, is open source, and allows everyone to train the same AI simultaneously. Imagine, instead of Microsoft building Copilot, the whole world building Copilot at the same time, training it through all global computing power. This would lead to a shocking and exponential growth of AI never seen before. This is why I think Copilot should allow everyone to train its AI.
Predictions for Artificial Intelligence in the next 2-3 years

- 2025: start of agentic AI. Oct 2025: ChatGPT 5 gets released (proven to be 10,000x more powerful than ChatGPT 4 and able to run tasks automatically)
- 2026: AI benchmarks match humans; the beginning of artificial general intelligence
- 2027: a new website called letsbuiltai is open source and encourages everyone to train AI. Instead of you training your own AI or an AI company training their own AI, this would involve everyone training a particular AI simultaneously, paving the way for faster and quicker AI growth
Introducing AzureImageSDK — A Unified .NET SDK for Azure Image Generation And Captioning

Hello 👋 I'm excited to share something I've been working on: AzureImageSDK, a modern, open-source .NET SDK that brings together Azure AI Foundry's image models (like Stable Image Ultra, Stable Image Core), along with Azure Vision and content moderation APIs and image utilities, all in one clean, extensible library. While working with Azure’s image services, I kept hitting the same wall: each model had its own input structure, parameters, and output format, and there was no unified, async-friendly SDK to handle image generation, visual analysis, and moderation under one roof. So... I built one. AzureImageSDK wraps Azure's powerful image capabilities into a single, async-first C# interface that makes it dead simple to:
🎨 Run inference on image models
🧠 Analyze visual content (image to text)
🚦 Use image utilities
all with just a few lines of code. It's fully open-source, designed for extensibility, and ready to support new models the moment they launch. 🔗 GitHub Repo: https://github.com/DrHazemAli/AzureImageSDK Also, I've posted the release announcement on the Azure AI Foundry's GitHub Discussions 👉🏻 feel free to join the conversation there too. The SDK is available on NuGet too. Would love to hear your thoughts, use cases, or feedback!
Introducing AzureSoraSDK: A Community C# SDK for Azure OpenAI Sora Video Generation

Hello everyone! I’m excited to share the first community release of AzureSoraSDK, a fully-featured .NET 6+ class library that makes it incredibly easy to generate AI-driven videos using Azure’s OpenAI Sora model and even improve your prompts on the fly. 🔗 Repository: https://github.com/DrHazemAli/AzureSoraSDK
Understanding Azure OpenAI Service Provisioned Reservations

Hello Team, we are building an Azure OpenAI based fine-tuned model using GPT-4o-mini for the long run. We want to understand the costing, and we came up with the following questions about PTU units under the Azure OpenAI Service Provisioned Reservations plan. We need to understand how it works: Is there any token quota limit on a provisioned fine-tuned model deployment? How many fine-tuned models with provisioned capacity can be deployed under the plan? How will the pricing be affected if we deploy multiple fine-tuned models? Model deployment: GPT-4o-mini fine-tuned. Region: North Central US. We are doing this for an enterprise customer; kindly help us resolve this issue.
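While waiting for an authoritative answer on quotas, a rough sanity check is possible because PTUs are a provisioned throughput pool shared by the deployments in a resource: multiply the PTUs purchased by the per-PTU token rate for the model and compare against expected load. The rate used below is a placeholder, not an official figure; take the real number for GPT-4o-mini from the Azure capacity calculator:

```python
# Hedged sketch: capacity sanity check for a PTU pool.
# tokens_per_min_per_ptu is a PLACEHOLDER value, not official pricing data.

def fits_in_quota(ptus: int, tokens_per_min_per_ptu: int,
                  expected_tokens_per_min: int) -> bool:
    """True if the expected load fits inside the provisioned throughput."""
    return expected_tokens_per_min <= ptus * tokens_per_min_per_ptu

# 50 PTUs at an assumed 2,500 tokens/min each vs. a 100k tokens/min load:
ok = fits_in_quota(ptus=50, tokens_per_min_per_ptu=2500,
                   expected_tokens_per_min=100_000)
```

The same arithmetic answers the multiple-deployment question directionally: deployments sharing one provisioned pool split its throughput, so adding fine-tuned deployments divides capacity rather than multiplying cost, but confirm the exact billing rules with your account team.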
Is AI Foundry in the new exam for DP-100?

25-30% of the DP-100 exam is now dedicated to Optimizing Language Models for AI Applications. Does this require Azure AI Foundry? It doesn't say specifically in the study guide: https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/dp-100 Also, the videos could benefit from being updated to cover the changes as of 16 January 2025.
Using AI to convert unstructured information to structured information

We have a use case to extract information from various types of documents like Excel, PDF, and Word and convert it into structured information. The data exists in different formats. We started building this use case with AI Builder, hit a roadblock, and are now exploring Copilot Studio. It would be great if someone could point us in the right direction. What would be the right technology stack for this use case? Thank you for the pointers.
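Whichever stack you land on (AI Builder, Copilot Studio, or Azure OpenAI directly), the pattern that tends to matter most is the same: define a target schema, ask the model for JSON, and validate the output before trusting it. A minimal sketch of that validation step, with illustrative field names that are not from the original question:

```python
import json

# Hedged sketch: validate model-extracted JSON against a target schema
# before loading it downstream. Field names are illustrative.

SCHEMA = {"invoice_number": str, "vendor": str, "total": float}

def validate_extraction(raw_json: str) -> dict:
    """Parse model output and check every schema field is present and typed."""
    data = json.loads(raw_json)
    for field, ftype in SCHEMA.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

record = validate_extraction(
    '{"invoice_number": "INV-42", "vendor": "Contoso", "total": 118.5}')
```

On Azure specifically, structured outputs (JSON schema mode) on the chat endpoint or Azure AI Document Intelligence for layout-heavy PDFs are the usual candidates to feed this kind of validation step, but either way keeping a strict schema check between the model and your database is cheap insurance.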
Azure ML Studio - Attached Compute Challenges

Hello community, I'm new to ML services and have been exploring ML Studio lately to understand it better from an infrastructure point of view. I understand that I should be able to attach an existing VM (Ubuntu) running in my Azure environment and use it as a compute resource in ML Studio. I've come across two challenges, and I would appreciate your help; I'm sure I'm just missing something small.

Firstly, I would like to connect to my virtual machine over a private endpoint. I have tried creating the private endpoint to my VM following the online guidance (Configure a private endpoint - Azure Machine Learning | Microsoft Learn). Both the VM and the endpoints are on the same subnet of the same vNet, yet it is unable to attach the compute. It seems to still default to the public IP of the VM, which is not what I am after. I still have SSH configured on port 22, and I have tried several options in my NSG for the source and destination information (service tags, IP addresses, etc.), but with no luck. Am I missing something? Is attaching an existing VM for compute over a private endpoint a supported configuration, or does the private endpoint only support compute created from the ML Studio compute section?

Secondly, if I forget about the private endpoint and attach the VM directly over the internet (not desired, obviously), it is not presented to me as a compute option when I try to run my Jupyter Notebook. I only have "Azure Machine Learning Serverless Spark" as a compute option, plus any compute that was indeed created through ML Studio. I don't have the option to select the existing VM that was attached from Azure. Again, is there a fundamental step or limitation that I am overlooking? Thanks in advance.
PacketMind: My Take on Building a Smarter DPI Tool with Azure AI

Just wanted to share a small but meaningful project I recently put together: PacketMind. It’s a lightweight Deep Packet Inspection (DPI) tool designed to help detect suspicious network traffic using Azure’s AI capabilities. Honestly, this project is a personal experiment that stemmed from one simple thought: why does DPI always have to be bulky, expensive, and stuck in legacy systems? I mean, think about it. Most of the time, we have to jump through hoops just to get basic packet inspection features, let alone advanced AI-powered traffic analysis. So I figured: let's see how far we can go by combining Azure's language models with some good old packet sniffing on Linux.

What's next? Let's be honest, PacketMind is an early prototype. There's a lot I'd love to add:
- GUI interface for easier use
- Custom model integration (right now it's tied to a specific Azure model)
- More protocol support, think beyond HTTP/S
- Alerting features, maybe even Slack/Discord hooks
But for now, I'm keeping it simple and focusing on making the core functionality solid.

Why share this? I could've just kept this as a side project on my machine, but sharing is part of the fun. If even one person finds PacketMind useful or gets inspired to build something similar, I'll consider it a win. So, if you're into networking, AI, or just like to mess with packet data for fun, check it out. Fork it, test it, break it, and let me know how you'd make it better. Here's the repo: https://github.com/DrHazemAli/packetmind Would love to hear your thoughts, suggestions, or just a thumbs up if you think it's cool. Cheers!
Recent Blogs
- Discover how AI-powered agents are revolutionizing customer engagement: enhancing real-time support, automating workflows, and empowering human professionals with intelligent orchestration. (Jun 30, 2025)
- (4 min read) This blog post introduces a new adaptive framework for distributed databases that leverages Graph Neural Networks (GNNs) and causal inference to overcome the classic limitations imposed by the CAP theorem. (Jun 27, 2025)