AMA: Azure AI Foundry Voice Live API: Build Smarter, Faster Voice Agents
Join us LIVE in the Azure AI Foundry Discord on October 14, 2025, at 10am PT to learn more about the Voice Live API.

Voice is no longer a novelty; it's the next-gen interface between humans and machines. From automotive assistants to educational tutors, voice-driven agents are reshaping how we interact with technology. But building seamless, real-time voice experiences has often meant stitching together a patchwork of services: STT, GenAI, TTS, avatars, and more. Until now.

Introducing Azure AI Foundry Voice Live API
Launched into general availability on October 1, 2025, the Azure AI Foundry Voice Live API is a game-changer for developers building voice-enabled agents. It unifies the entire voice stack (speech-to-text, generative AI, text-to-speech, avatars, and conversational enhancements) into a single, streamlined interface. That means:
⚡ Lower latency
🧠 Smarter interactions
🛠️ Simplified development
📈 Scalable deployment
Whether you're prototyping a voice bot for customer support or deploying a full-stack assistant in production, the Voice Live API accelerates your journey from idea to impact.

Ask Me Anything: Deep Dive with the CoreAI Speech Team
Join us for a live AMA session where you can engage directly with the engineers behind the API:
🗓️ Date: October 14, 2025
🕒 Time: 10am PT
📍 Location: https://aka.ms/foundry/discord (see the Events section)
🎤 Speakers: Qinying Liao, Principal Program Manager, CoreAI Speech; Jan Gorgen, Senior Program Manager, CoreAI Speech
They'll walk through real-world use cases, demo the API in action, and answer your toughest questions, from latency optimization to avatar integration.

Who Should Attend?
This AMA is designed for:
- AI engineers building multimodal agents
- Developers integrating voice into enterprise workflows
- Researchers exploring conversational UX
- Foundry users looking to scale voice prototypes

Why It Matters
The Voice Live API isn't just another endpoint; it's a foundation for building natural, responsive, and production-ready voice agents. With Azure AI Foundry's orchestration and deployment tools, you can:
- Skip the glue code
- Focus on experience design
- Deploy with confidence across platforms

Bring Your Questions
Curious about latency benchmarks? Want to know how avatars sync with TTS? Wondering how to integrate with your existing Foundry workflows? This is your chance to ask the team directly.
How to Master GitHub Copilot: Build, Prompt, Deploy Smarter
Mastering GitHub Copilot: Build, Prompt, Deploy Smarter is a free, hands-on workshop designed to help developers go beyond autocomplete and unlock the true power of AI-assisted coding. Instead of toy examples, this course walks you through real-world software engineering challenges: messy codebases, multi-language projects, cloud deployments, and legacy system upgrades. You'll learn practical skills like prompt engineering, advanced Copilot features, and AI pair programming techniques that make you faster, sharper, and more creative.

Whether you're a junior developer or a seasoned architect, mastering GitHub Copilot will help you:
- Reduce cognitive load and focus on system design
- Accelerate onboarding for new engineers
- Write cleaner, more consistent code
- Automate repetitive tasks to free up time for innovation

AI coding tools like GitHub Copilot are no longer optional; they're essential. This workshop gives you the skills to collaborate with Copilot effectively and stay competitive in the age of AI-powered development.
Talk to your data: PostgreSQL gets a voice in VS Code
As product roadmaps accelerate to meet new business needs, developer teams are struggling to maintain productivity, with some even reporting that they need to use six or more tools to do their job. To address this, Microsoft has released an improved PostgreSQL extension to help developers working with PostgreSQL in Visual Studio (VS) Code streamline their workflows and increase productivity.

PostgreSQL and VS Code are top choices among developers, but until now, using them together meant constantly Alt-Tabbing or copying queries between windows. The new extension, now in public preview, brings full database management and query capabilities into VS Code, right alongside your code. You can connect to PostgreSQL databases, run queries, explore schemas, and even get AI assistance to talk to your data. The extension integrates with Azure Database for PostgreSQL and supports local databases, so it works whether you're working with a cloud database or a container on your machine. Plus, features like context-aware IntelliSense and a built-in AI Copilot agent help you write and optimize SQL queries, so you can focus on building your application instead of wrestling with disconnected tools.
Practical ways to use AI in your Data Science and ML journey
Are you ready to take your learning from "just reading" to "actively doing"? We are launching a series of interactive events designed to help you bridge the gap between theory and practice, connect with peers, and learn directly from experts, all in a vibrant, supportive Discord community. The series will guide you on practical ways to use AI in your learning journey with Data Science and Machine Learning. Join the series on Discord at: https://aka.ms/ds4beginners/discord

Why You Should Join
- Guided Pathways: Move seamlessly from the popular Data Science for Beginners and Machine Learning for Beginners curriculums to interacting with fellow learners in the Azure AI Foundry Discord community. We've designed study plans, office hours sessions, and showcases so you can learn, ask questions, and get feedback in real time.
- Practical AI Skills: See how Artificial Intelligence can supercharge your learning.
- Community & Collaboration: Find peer learners with similar goals, collaborate on projects, and get support from experienced moderators and MVPs.

Machine Learning AMA 1
- Theme: Learning Machine Learning with AI, responsibly.
- Date: Tuesday, 23 September (10AM PST | 1PM ET | 5PM GMT | 8PM EAT)
- What to Expect: Curriculum overview, hands-on AI demos, critique and correct a simple classifier, and peer collaboration.
- Link: https://aka.ms/learnwithai/ml

Data Science AMA 2
- Theme: Learning Data Science with AI, responsibly.
- Date: Tuesday, 30 September (10AM PST | 1PM ET | 5PM GMT | 8PM EAT)
- What to Expect: Curriculum overview, integrating AI into your learning, live Q&A, and practical tips for responsible AI use.
- Link: https://aka.ms/learnwithai/ds

Showcase 1: GitHub Copilot for Data Science
- Theme: Hands-on AI Agents for Data Scientists
- Date: Thursday, 25 September (10AM PST | 1PM ET | 5PM GMT | 8PM EAT)
- What to Expect: Learn to use Copilot's advanced features in Python and Jupyter Notebooks, customize agents, and extend your analyses.
- Link: https://aka.ms/learnwithai/agents

How to Get Involved
- Join the Discord channels: https://aka.ms/ds4beginners/discord and https://aka.ms/ml4beginners/discord
- Prepare: Make sure you have a GitHub account, VS Code, and Data Wrangler ready for hands-on sessions.

What Makes This Program Different?
- Discord-first learning: Experience higher engagement and deeper connections.
- Curriculum-anchored activities: Every event is designed to build on what you've learned in the open-source curriculums.
- AI-powered study plans and resources: Get starter notebooks, prompt cards, and recap templates to accelerate your progress.

Resources
- Hands-on workshop: https://aka.ms/ds-agents
- Data Science for Beginners curriculum: https://aka.ms/ds4beginners
- Machine Learning for Beginners curriculum: https://aka.ms/ml4beginners

Ready to transform your learning journey? Join us for these exciting events in the Azure AI Foundry Discord community. Whether you're just starting out or looking to deepen your skills, there's a place for you here!
Zone redundancy is now enabled by default in Azure Container Registry
We're excited to announce a major resiliency enhancement for Azure Container Registry (ACR): zone redundancy is now enabled by default for all ACR SKUs in regions that support Availability Zones. This means your ACR resources are now more resilient automatically and at no additional cost.

What's Changed?
Previously, zone redundancy was only available for registries using the Premium SKU. With this update, all SKUs, including Basic and Standard, now benefit from zone redundancy in supported regions. This change has been applied automatically to all new and existing registries in regions that support Availability Zones, with no action required from you.

What Does This Mean for You?
Zone redundancy ensures that your registry's data is replicated across multiple physical zones within a region. This means:
- Improved resilience: Your container artifacts stored in Azure Container Registry are now protected against single-zone outages.
- Zero configuration required: If your registry is in a supported region, it's automatically zone redundant.
- No extra cost: This enhancement is included at no additional charge for all ACR SKUs.

Current Experience
At this time, the Azure portal and CLI may not reflect this change accurately. Specifically, the zone redundancy property in your registry's configuration may still appear as false, even though zone redundancy is enabled for all registries in supported regions. We're actively working to update the portal and API surfaces to reflect this new default behavior more transparently (a quick way to inspect the reported property is sketched at the end of this post). All features enabled before the migration will continue to work as expected after this update. For a full list of regions that support zone redundancy, see: List of Azure regions | Microsoft Learn.

FAQ
Q: What's the benefit for me?
A: Your registry is now more resilient to single-zone failures, improving resiliency and reliability without any changes on your part.
Q: How much does this cost?
A: This upgrade is included at no additional cost across all SKUs.
Q: Do I need a specific SKU to benefit from this?
A: No. In Availability Zone-enabled regions, zone redundancy is now enabled by default on all SKUs.
Q: Which regions support zone redundancy?
A: You can find the full list of supported regions here: List of Azure regions | Microsoft Learn.
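As mentioned above, the portal and CLI may lag behind the actual behavior, but you can still inspect what your registry currently reports. Here is a minimal sketch using the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-containerregistry packages, and the resource names are placeholders:

```python
# Sketch: reading a registry's reported zone redundancy setting.
# Assumes azure-identity and azure-mgmt-containerregistry are installed;
# resource names are placeholders. Per the caveat above, the reported
# value may still show as disabled even where redundancy is active.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient

client = ContainerRegistryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

registry = client.registries.get("<resource-group>", "<registry-name>")
print(registry.location, registry.sku.name, registry.zone_redundancy)
```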
Level Up Your Python Game with Generative AI Free Livestream Series This October!
If you've been itching to go beyond basic Python scripts and dive into the world of AI-powered applications, this is your moment. Join Pamela Fox and Gwyneth Peña-Siguenza, who are thrilled to announce a brand-new free livestream series running throughout October, focused on Python + Generative AI, and this time we're going even deeper with Agents and the Model Context Protocol (MCP). Whether you're just starting out with LLMs or you're refining your multi-agent workflows, this series is designed to meet you where you are and push your skills to the next level.

🧠 What You'll Learn
Each session is packed with live coding, hands-on demos, and real-world examples you can run in GitHub Codespaces.

🎥 Why Join?
- Live coding: No slides-only sessions; we build together, step by step.
- All code shared: Clone and run in GitHub Codespaces or your local setup.
- Community support: Join weekly office hours and our AI Discord for Q&A and deeper dives.
- Modular learning: Each session stands alone, so you can jump in anytime.

🔗 Register for the full series

🌍 ¿Hablas español? We've got you covered! Gwyneth Peña-Siguenza will be leading a parallel series in Spanish, covering the same topics with localized examples and demos.
🔗 Regístrese para la serie en español

Whether you're building your first AI app or architecting multi-agent systems, this series is your launchpad. Come for the code, stay for the community, and leave with a toolkit that scales. Let's build something brilliant together. 💡 Join the discussions and share your experience at the Azure AI Discord Community.
GPT-5: Will it RAG?
OpenAI released the GPT-5 model family last week, with an emphasis on accurate tool calling and reduced hallucinations. For those of us working on RAG (Retrieval-Augmented Generation), it's particularly exciting to see a model specifically trained to reduce hallucination. There are five variants in the family:
- gpt-5
- gpt-5-mini
- gpt-5-nano
- gpt-5-chat: not a reasoning model, optimized for chat applications
- gpt-5-pro: only available in ChatGPT, not via the API

As soon as GPT-5 models were available in Azure AI Foundry, I deployed them and evaluated them inside our most popular open source RAG template. I was immediately impressed, not by the model's ability to answer a question, but by its ability to admit it could not answer a question! You see, we have one test question for our sample data (HR documents for a fictional company) that sounds like it should be an easy question: "What does a Product Manager do?" But if you actually look at the company documents, there's no job description for "Product Manager", only related jobs like "Senior Manager of Product Management". Every other model, including the reasoning models, still pretended that it could answer that question, as shown in a response from o4-mini. However, the gpt-5 model realizes that it doesn't have the information necessary, and responds that it cannot answer the question. As I always say: I would much rather have an LLM admit that it doesn't have enough information instead of making up an answer.

Bulk evaluation
But that's just a single question! What we really need to know is whether the GPT-5 models will generally do a better job across the board, on a wide range of questions. So I ran bulk evaluations using the azure-ai-evaluation SDK (a sketch of such a run follows below), checking my favorite metrics: groundedness (LLM-judged), relevance (LLM-judged), and citations_matched (regex-based, using the ground truth citations). I didn't bother evaluating gpt-5-nano, as I did some quick manual tests and wasn't impressed enough; plus, we've never used a nano-sized model for our RAG scenarios. Here are the results for 50 Q/A pairs:

| metric | stat | gpt-4.1-mini | gpt-4o-mini | gpt-5-chat | gpt-5 | gpt-5-mini | o3-mini |
|---|---|---|---|---|---|---|---|
| groundedness | pass % | 94% | 86% | 96% | 100% 🏆 | 94% | 96% |
| | mean score | 4.76 | 4.50 | 4.86 | 5.00 🏆 | 4.82 | 4.80 |
| relevance | pass % | 94% 🏆 | 84% | 90% | 90% | 74% | 90% |
| | mean score | 4.42 🏆 | 4.22 | 4.06 | 4.20 | 4.02 | 4.00 |
| answer_length | mean (chars) | 829 | 919 | 549 | 844 | 940 | 499 |
| latency | mean (s) | 2.9 | 4.5 | 2.9 | 9.6 | 7.5 | 19.4 |
| citations_matched | % | 52% | 49% | 52% | 47% | 49% | 51% |

For the LLM-judged metrics of groundedness and relevance, the LLM judge awards a score of 1-5, and both 4 and 5 are considered passing scores. That's why you see both a "pass %" (percentage with a 4 or 5 score) and a mean score in the table above. For the groundedness metric, which measures whether an answer is grounded in the retrieved search results, the gpt-5 model does the best (100%), while the other gpt-5 models do quite well also, on par with our current default model of gpt-4.1-mini. For the relevance metric, which measures whether an answer fully answers a question, the gpt-5 models don't score as highly as gpt-4.1-mini. I looked into the discrepancies there, and I think that's actually due to gpt-5 being less willing to give an answer when it's not fully confident in it; it would rather give a partial answer instead. That's a good thing for RAG apps, so I am comfortable with that metric being less than 100%.
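For reference, here is a minimal sketch of the kind of bulk run described above, using the azure-ai-evaluation SDK with the two LLM-judged metrics. The file path, endpoint, and deployment names are illustrative, and the regex-based citations_matched metric is a custom evaluator not shown here:

```python
# Sketch: a bulk evaluation run with the azure-ai-evaluation SDK.
# Assumes a JSONL file where each row has "query", "context", and
# "response" fields; endpoint, key, and deployment are placeholders.
from azure.ai.evaluation import GroundednessEvaluator, RelevanceEvaluator, evaluate

model_config = {
    "azure_endpoint": "https://<your-endpoint>.openai.azure.com",
    "api_key": "<api-key>",
    "azure_deployment": "<judge-deployment>",  # the LLM acting as judge
}

result = evaluate(
    data="rag_answers.jsonl",  # illustrative path: one Q/A pair per line
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "relevance": RelevanceEvaluator(model_config),
        # citations_matched would be a custom regex evaluator, omitted here
    },
)
print(result["metrics"])  # aggregate pass rates and mean scores
```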
The latency metric is generally higher for the gpt-5 reasoning models, as would be expected, but it also varies based on deployment region, region capacity, etc., assuming you're not using a "provisioned throughput" deployment. Also note that the latency here records the total time taken, from first token to last token, whereas the most important metric for a user-facing streaming chat is the time to first token. For the gpt-5 reasoning models in this evaluation, I set the reasoning_effort to "minimal", which means the model chooses whether to use reasoning tokens. I have never seen it actually use any reasoning tokens when I set the effort to minimal, so maybe that means that a higher reasoning effort is really only needed for longer or more complex tasks, and RAG answering is a non-reasoning task. A higher reasoning effort would definitely affect the latency and likely also affect the answer quality. I did not test that out, since the "minimal" effort setting already results in high quality answers.

Answer differences
Now that we've seen the overall metrics, let's dig into some actual answers, and see some of the key ways that GPT-5 model answers differ.

Quicker to say "I don't know" 🤷🏻
Let's start with the first thing I noticed: gpt-5 was happy to admit when it couldn't fully answer a question. In the 50 questions evaluated, the gpt-5 model started off its answer with "I don't know" for 3 of the answers, while gpt-5-mini started that way for 10 of the answers. In table form:

| gpt-4.1-mini | gpt-5 | gpt-5-mini | gpt-5-chat |
|---|---|---|---|
| 1 (2%) | 3 (6%) | 10 (20%) | 1 (2%) |

Here's an example of a gpt-4.1-mini answer versus a gpt-5 answer:

Q: Explain the eligibility criteria for the Northwind Standard plan offered to Contoso employees?

gpt-4.1-mini: The eligibility criteria for the Northwind Standard plan for Contoso employees are not explicitly detailed in the provided sources. However, it is mentioned that prior-authorization may be required for certain services or treatments, which involves approval from Northwind Health based on medical necessity and coverage under the plan. Employees must have their physician submit a request for prior-authorization, and in some cases, meet specific criteria such as medical tests, treatments, or lifestyle changes to be approved. Preventive care, emergency services, mental health, and substance abuse coverage do not require prior-authorization. Out-of-network services do require prior-authorization and may incur additional costs [Northwind_Standard_Benefits_Details.pdf#page=63][Northwind_Standard_Benefits_Details.pdf#page=102]. If you need information on basic eligibility (such as employment status or tenure), that is not provided in these sources.

gpt-5: I don’t know. The provided sources describe compliance (ERISA, COBRA, HIPAA, ACA) and note that Northwind Standard is a group plan covering “all eligible employees of Contoso and their spouses or dependents,” but they do not define the specific eligibility criteria (e.g., hours worked, waiting period, employment class). [Northwind_Standard_Benefits_Details.pdf#page=88][Northwind_Standard_Benefits_Details.pdf#page=85][Northwind_Standard_Benefits_Details.pdf#page=86]

Both of the answers admit some degree of not-knowing, but gpt-5 comes right out with it at the start of the answer. Notably, our system prompt includes this instruction:

"Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know."
So what we may be seeing is a combination of two factors: 1) the gpt-5 reasoning models are generally better at following instructions, and 2) the gpt-5 reasoning models went through a specific alignment process to reduce hallucination. Notably, however, the gpt-5-chat model rarely responded with "I don't know", so it is not as affected by whatever factors are increasing the likelihood of the reasoning models to answer that way. You may be wondering why I have questions in the evaluation set that aren't fully answerable: it's hard to generate a good set of ground truth for evaluation, and my ground truth data is not the optimal set of questions. Ideally, I would do a separate evaluation for all the fully knowable questions versus all the unknowable questions (I even have a specific custom metric for "don't-know-ness" that I run on those), but a few partially knowable questions snuck into this ground truth data set.

Requests for clarification 🙋🏻‍♀️
Our system prompt also encourages the model to ask clarifying questions when necessary:

"If asking a clarifying question to the user would help, ask the question."

But I had never actually seen a clarifying question from a model until gpt-5-chat, which replied with a question to 2 of the 50 questions. Example:

Q: What should I do if I need healthcare services outside Washington with Northwind Health?

gpt-4.1-mini: If you have Northwind Standard and need healthcare services outside Washington, you must meet these criteria for coverage: be outside Washington for no more than 12 consecutive months, be more than 100 miles away from your primary care provider, and have had your plan in effect for at least 6 months before receiving out-of-area care. You must contact Northwind Health in advance and obtain prior authorization for the care; otherwise, you may be responsible for the full cost. Emergency services outside Washington are covered, but non-emergency services are not unless these conditions are met [Northwind_Standard_Benefits_Details.pdf#page=59][Northwind_Standard_Benefits_Details.pdf#page=60]. If you have Northwind Health Plus, first ensure the provider is in the nationwide network by checking the online directory or calling customer service. Keep receipts and documentation for any out-of-state services, as you may need to submit claims for reimbursement. Contact Northwind Health Plus before receiving services to confirm coverage, as not all services outside Washington are covered [Northwind_Health_Plus_Benefits_Details.pdf#page=62].

gpt-5-chat: Which plan do you have — Northwind Standard or Northwind Health Plus?

Like in the example above, other models typically avoid asking a clarifying question by instead addressing every possible scenario (covering all their bases!), resulting in a fairly long answer. Arguably, it's a better user experience to ask the clarifying question, so that the user can get an answer tailored to their specific situation and not get confused by other possibilities. The primary drawback is the additional back-and-forth for the final answer. I did not personally see clarifying questions from the GPT-5 reasoning models, but my colleagues noticed an increase in such questions when using gpt-5 with GitHub Copilot Agent mode, so it's possible that every model in the family has an increased likelihood to ask for clarification when faced with ambiguity.
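Since both system prompt rules have now come up, here is a minimal sketch of what an answering call with those instructions and with reasoning_effort set to "minimal" (as described in the latency discussion above) might look like. It assumes the openai package and an Azure OpenAI deployment named gpt-5; the endpoint, key, API version, and prompt wording are illustrative, not the sample app's exact code:

```python
# Sketch: the answering call with reasoning_effort="minimal" and the
# system prompt rules quoted above. All names are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-endpoint>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-12-01-preview",
)

SYSTEM_PROMPT = (
    "Answer ONLY with the facts listed in the list of sources below. "
    "If there isn't enough information below, say you don't know. "
    "If asking a clarifying question to the user would help, ask the question."
)

response = client.chat.completions.create(
    model="gpt-5",              # your deployment name
    reasoning_effort="minimal", # let the model skip reasoning tokens on easy tasks
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Sources:\n<retrieved chunks>\n\nQuestion: What does a Product Manager do?"},
    ],
)
print(response.choices[0].message.content)
```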
Reasoning models ❤️ lists
I saw a definite tendency of the GPT-5 reasoning models to output the answer as a list, even though there's nothing in the RAG system prompt that tells them to use lists. Out of the 50 questions evaluated, here are the numbers with answers formatted as a top-level bulleted list:

| gpt-4.1-mini | gpt-5 | gpt-5-mini | gpt-5-chat |
|---|---|---|---|
| 0 | 36 (72%) | 26 (52%) | 0 |

Here's an example of a gpt-4.1-mini answer (paragraph) versus a gpt-5 answer (list):

Q: What are the coverage limitations of the Northwind Standard plan regarding clinical trials?

gpt-4.1-mini: The Northwind Standard plan covers certain clinical trial services such as diagnostic testing, treatment of the condition being studied, medications, lab services, and imaging services. However, it does not cover travel expenses associated with attending clinical trials. Additionally, any experimental treatments or services not part of the clinical trial are not covered. Coverage for other clinical trial services not explicitly listed may be considered on a case-by-case basis. Members should contact Northwind Health customer service for more details [Northwind_Standard_Benefits_Details.pdf#page=23][Northwind_Standard_Benefits_Details.pdf#page=24].

gpt-5:
- Travel expenses for attending clinical trials are not covered [Northwind_Standard_Benefits_Details.pdf#page=23][Northwind_Standard_Benefits_Details.pdf#page=24].
- Experimental treatments or services that are not part of the clinical trial are not covered [Northwind_Standard_Benefits_Details.pdf#page=24].
- Services or treatments deemed experimental or investigational are not covered by the plan [Northwind_Standard_Benefits_Details.pdf#page=53].

Now, is it a bad thing that the gpt-5 reasoning models use lists? Not necessarily! But if that's not the style you're looking for, then you either want to consider the gpt-5-chat model or add specific messaging in the system prompt to steer the model away from top-level lists.

Longer answers
As we saw in the overall metrics above, there was an impact on answer length (measured in the number of characters, not tokens). Let's isolate those stats:

| gpt-4.1-mini | gpt-5 | gpt-5-mini | gpt-5-chat |
|---|---|---|---|
| 829 | 844 | 990 | 549 |

The gpt-5 reasoning models are generating answers of similar length to the current baseline of gpt-4.1-mini, though the gpt-5-mini model seems to be a bit more verbose. The API now has a new parameter to control verbosity for those models, which defaults to "medium". I did not try an evaluation with that set to "low" or "high", which would be an interesting evaluation to run. The gpt-5-chat model outputs relatively short answers, which are actually closer in length to the answer length that I used to see from gpt-3.5-turbo. What answer length is best? A longer answer will take longer to finish rendering to the user (even when streaming), and will cost the developer more tokens. However, sometimes answers are longer due to better formatting that is easier to skim, so longer does not always mean less readable. For the user-facing RAG chat scenario, I generally think that shorter answers are better. If I were putting these gpt-5 reasoning models in production, I'd probably try out the "low" verbosity value, or put instructions in the system prompt, so that users get their answers more quickly. They can always ask follow-up questions as needed.

Fancy punctuation
This is a weird difference that I discovered while researching the other differences: the GPT-5 models are more likely to use "smart" quotes instead of standard ASCII quotes. Specifically:
- Left single: ‘ (U+2018)
- Right single / apostrophe: ’ (U+2019)
- Left double: “ (U+201C)
- Right double: ” (U+201D)

For example, the gpt-5 model actually responded with "I don’t know", not with "I don't know". It's a subtle difference, but if you are doing any sort of post-processing or analysis, it's good to know (a small normalization sketch follows below). I've also seen the models use smart quotes incorrectly in coding contexts (like GitHub Copilot Agent mode), so that's another potential issue to look out for. I assume the models were trained on data that tended to use smart quotes more often, perhaps synthetic data or book text. I know that as a normal human, I rarely use them, given the extra effort required to type them.
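If your post-processing expects ASCII quotes, a tiny normalization pass over the four characters listed above is enough; this is a minimal sketch in plain Python:

```python
# Sketch: normalize the smart punctuation GPT-5 tends to emit back to
# ASCII before any string matching or post-processing.
SMART_TO_ASCII = str.maketrans({
    "\u2018": "'",  # left single quote
    "\u2019": "'",  # right single quote / apostrophe
    "\u201C": '"',  # left double quote
    "\u201D": '"',  # right double quote
})

answer = "I don\u2019t know."
print(answer.translate(SMART_TO_ASCII))  # -> I don't know.
```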
Query rewriting to the extreme
Our RAG flow makes two LLM calls: the second answers the question, as you'd expect, but the first rewrites the user's query into a strong search query. This step can fix spelling mistakes, but it's even more important for filling in missing context in multi-turn conversations, like when the user simply asks, "what else?" A well-crafted rewritten query leads to better search results and, ultimately, a more complete and accurate answer. During my manual tests, I noticed that the rewritten queries from the GPT-5 models are much longer, filled to the brim with synonyms. For example:

Q: What does a Product Manager do?

gpt-4.1-mini: product manager responsibilities

gpt-5-mini: Product Manager role responsibilities duties skills day-to-day tasks product management overview

Are these new rewritten queries better or worse than the previous short ones? It's hard to tell, since they're just one factor in the overall answer output, and I haven't set up a retrieval-specific metric. The closest metric is citations_matched, since the new answer from the app can only match the citations in the ground truth if the app managed to retrieve all the same citations. That metric was generally high for these models, and when I looked into the cases where the citations didn't match, I typically thought the gpt-5 family of responses were still good answers. I suspect that the rewritten query does not have a huge effect either way, since our retrieval step uses hybrid search from Azure AI Search, and the combined power of keyword and vector search generally compensates for differences in search query wording. It's worth evaluating this further, however, and considering using a different model for the query rewriting step (a sketch of that pattern follows below). Developers often choose a smaller, faster model for that stage, since query rewriting is an easier task than answering a question.
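As a rough illustration of that pattern, here is a minimal sketch of a query-rewriting step run on a smaller model before retrieval. The prompt wording, deployment name, and credentials are illustrative assumptions, not the sample app's actual code:

```python
# Sketch: rewriting the latest user question into a search query on a
# smaller, faster model, before handing the results to the answer model.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-endpoint>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    api_version="2024-12-01-preview",
)

history = [
    {"role": "user", "content": "What does a Product Manager do?"},
]
rewrite = client.chat.completions.create(
    model="gpt-4.1-mini",  # a smaller model is often enough for rewriting
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's latest question into a concise "
                       "search query, resolving references to earlier turns.",
        },
        *history,
    ],
)
search_query = rewrite.choices[0].message.content
print(search_query)  # feed this to the retrieval step
```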
So, are the answers accurate?
Even with a 100% groundedness score from an LLM judge, it's possible that a RAG app can be producing inaccurate answers, for example if the LLM judge is biased or the retrieved context is incomplete. The only way to really know if RAG answers are accurate is to send them to a human expert. For the sample data in this blog, there is no human expert available, since the documents are synthetically generated. Despite two years of staring at those documents and running dozens of evaluations, I still am not an expert in the HR benefits of the fictional Contoso company. That's why I also ran the same evaluations on the same RAG codebase, but with data that I know intimately: my own personal blog. I looked through 200 answers from gpt-5, and did not notice any inaccuracies in the answers. Yes, there are times when it says "I don't know" or asks a clarifying question, but I consider those to be accurate answers, since they do not spread misinformation. I imagine that I could find some way to trip up the gpt-5 model, but on the whole, it looks like a model with a high likelihood of generating accurate answers when given relevant context.

Evaluate for yourself!
I share my evaluations on our sample RAG app as a way to pass along general learnings about model differences, but I encourage every developer to evaluate these models for your specific domain, alongside domain experts who can reason about the correctness of the answers. How can you evaluate?
- If you are using the same open source RAG project for Azure, deploy the GPT-5 models and follow the steps in the evaluation guide.
- If you have your own solution, you can use an open-source SDK for evaluating, like azure-ai-evaluation (the one that I use), DeepEval, promptfoo, etc.
- If you are using an observability platform like Langfuse, Arize, or Langsmith, they have evaluation strategies baked in.
- If you're using an agents framework like Pydantic AI, those also often have built-in eval mechanisms.

If you can share what you learn from evaluations, please do! We are all learning about the strange new world of LLMs together.
Azure Workbook for ACR tokens and their expiration dates
In this article, we will see how to monitor Azure Container Registry (ACR) tokens and their expiration dates. We will demonstrate how to do this using the Azure REST API (Registries - Tokens - List) and an Azure Workbook. To obtain a list of ACR tokens and their expiration dates using the Azure Resource Manager API, we need to perform a series of REST API calls to authenticate and retrieve the necessary information. This process involves the following steps:
1. Authenticate and obtain an access token.
2. List ACR tokens.
3. Get token credentials and expiration dates.
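Here is a minimal sketch of those three steps in Python, calling the ARM REST API directly. It assumes the azure-identity and requests packages; the api-version and resource names are illustrative and may need adjusting to the current Registries - Tokens - List API version:

```python
# Sketch: list ACR tokens and print their password expiration dates via
# the ARM REST API. Resource names and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

sub, rg, registry = "<subscription-id>", "<resource-group>", "<registry-name>"

# Step 1: authenticate and obtain an ARM access token.
cred = DefaultAzureCredential()
arm_token = cred.get_token("https://management.azure.com/.default").token

# Step 2: list the registry's tokens.
url = (
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.ContainerRegistry/registries/{registry}/tokens"
)
resp = requests.get(
    url,
    params={"api-version": "2023-01-01-preview"},  # illustrative version
    headers={"Authorization": f"Bearer {arm_token}"},
)
resp.raise_for_status()

# Step 3: walk each token's credentials and report password expirations.
for token in resp.json().get("value", []):
    creds = token.get("properties", {}).get("credentials", {})
    for pwd in creds.get("passwords", []):
        print(token["name"], pwd.get("name"), pwd.get("expiry", "no expiry"))
```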
Getting to know MCP: An introduction to the Model Context Protocol (MCP)
Don't miss the upcoming "Let's Learn – MCP" event on Microsoft Reactor on July 24, designed for anyone who wants to get to know the new standard for intelligent agents (the Model Context Protocol) and learn how to put it into practice. The session is in Italian and the demos are in Python, but it is part of a series of livestreams available in many languages.
Strategic Solutions for Seamless Integration of Third-Party SaaS
Modern systems must be modular and interoperable by design. Integration is no longer a feature; it's a requirement. Developers are expected to build architectures that connect easily with third-party platforms, but too often, core systems are designed in isolation. This disconnect creates friction for downstream teams and slows delivery.

At Microsoft, SaaS platforms like SAP SuccessFactors and Eightfold support Talent Acquisition by handling functions such as requisition tracking, application workflows, and interview coordination. These tools help reduce costs and free up engineering focus for high-priority areas like Azure and AI. The real challenge is integrating them with internal systems such as Demand Planning, Offer Management, and Employee Central.

This blog post outlines a strategy centered on two foundational components: an Integration and Orchestration Layer and a Messaging Platform. Together, these enable real-time communication, consistent data models, and scalable integration. While Talent Acquisition is the use case here, the architectural patterns apply broadly across domains. Whether you're embedding AI pipelines, managing edge deployments, or building platform services, thoughtful integration needs to be built into the foundation, not bolted on later.
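To make the messaging pattern concrete, here is a minimal sketch of publishing an integration event to a topic, using Azure Service Bus as the messaging platform. The topic name, event schema, and connection string are illustrative assumptions, not the actual systems described above:

```python
# Sketch: publishing an integration event to a messaging platform so
# downstream systems (e.g., Offer Management) can react in real time.
# Topic name, event schema, and connection string are illustrative.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
TOPIC = "talent-acquisition-events"           # hypothetical topic

event = {
    "type": "candidate.stage.changed",  # hypothetical event type
    "candidateId": "12345",
    "stage": "interview-scheduled",
    "source": "eightfold",
}

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender(topic_name=TOPIC) as sender:
        # A consistent, versioned event schema keeps consumers decoupled
        # from each SaaS vendor's payload format.
        sender.send_messages(ServiceBusMessage(json.dumps(event)))
```

Publishing through a topic rather than calling each consumer directly is what keeps the orchestration layer thin: new subscribers can be added without touching the producing system.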