observability
Want Safer, Smarter AI? Start with Observability in Azure AI Foundry
Observability in Azure AI: From Black Box to Transparent Intelligence

If you are an AI developer or engineer, Azure AI observability gives you deep visibility into agent behavior, enabling you to trace decisions, evaluate response quality, and integrate automated testing into your workflows. This empowers you to build safer, more reliable GenAI applications. Responsible AI and compliance teams use observability tools to ensure transparency and accountability, leveraging audit logs, policy mapping, and risk scoring. These capabilities help organizations align AI development with ethical standards and regulatory requirements.

Understanding Observability

Imagine you're building a customer support chatbot using Azure AI. It's designed to answer billing questions, troubleshoot issues, and escalate complex cases to human agents. Everything works well in testing—but once deployed, users start reporting confusing answers and slow response times. Without observability, you're flying blind. You don't know:
- Which queries are failing.
- Why the chatbot is choosing certain responses.
- Whether it's escalating too often or not enough.
- How latency and cost are trending over time.

Enter observability. With Azure AI Foundry and Azure Monitor, you can:
- Trace every interaction: See the full reasoning path the chatbot takes—from user input to model invocation to tool calls.
- Evaluate response quality: Automatically assess whether answers are grounded, fluent, and relevant.
- Monitor performance: Track latency, throughput, and cost per interaction.
- Detect anomalies: Use Azure Monitor's ML-powered diagnostics to spot unusual patterns.
- Improve continuously: Feed evaluation results back into your CI/CD pipeline to refine the chatbot with every release.

This is observability in action: turning opaque AI behavior into transparent, actionable insights. It's not just about fixing bugs—it's about building AI you can trust. Next, let's understand more about observability.

What Is Observability in Azure AI?

Observability in Azure AI refers to the ability to monitor, evaluate, and govern AI agents and applications across their lifecycle—from model selection to production deployment. It's not just about uptime or logs anymore. It's about trust, safety, performance, cost, and compliance.

Observability aligned with the end-to-end AI application development workflow. Image source: Microsoft Learn

Key Components and Capabilities

Azure AI Foundry Observability: Built-in observability for agentic workflows. Tracks metrics like performance, quality, cost, safety, relevance, and groundedness in real time. Enables tracing of agent interactions and data lineage. Supports alerts for risky or off-policy responses and integrates with partner governance platforms. Find details here: Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn

AI Red Teaming (PyRIT Integration): Scans agents for safety vulnerabilities. Evaluates attack success rates across categories like hate, violence, sexual content, and more. Generates scorecards and logs results in the Foundry portal. Find details here: AI Red Teaming Agent - Azure AI Foundry | Microsoft Learn

Image source: Microsoft Learn

CI/CD Integration: GitHub Actions and Azure DevOps workflows automate evaluations, enabling continuous monitoring and regression detection during development.
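To make the evaluation idea concrete, here is a minimal sketch using the azure-ai-evaluation Python package to score a single chatbot answer for groundedness and relevance. The endpoint, key, deployment, and sample data are placeholders, and the exact API surface may differ by SDK version.

```python
# pip install azure-ai-evaluation
# Minimal sketch: scoring one chatbot answer for groundedness and relevance.
# Endpoint, key, and deployment below are placeholders, not real values.
from azure.ai.evaluation import GroundednessEvaluator, RelevanceEvaluator

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "gpt-4o",  # judge model used by the evaluators
}

groundedness = GroundednessEvaluator(model_config)
relevance = RelevanceEvaluator(model_config)

query = "Why was my bill higher this month?"
context = "Bills increase when usage exceeds the plan's included quota."
response = "Your bill went up because your usage exceeded the included quota."

# Each evaluator returns a dict of scores (typically on a 1-5 scale)
# along with a short reasoning string.
print(groundedness(query=query, response=response, context=context))
print(relevance(query=query, response=response))
```

Wired into a GitHub Actions or Azure DevOps job, a script like this can flag regressions in answer quality with every release.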
Azure Monitor + Azure BRAIN: Uses ML and LLMs for anomaly detection, forecasting, and root cause analysis. Offers multi-tier log storage (Gold, Silver, Bronze) with a unified KQL query experience. Integrates with Azure Copilot for diagnostics and optimization.

OpenTelemetry Extensions: Azure is extending OTel with agent-specific entities like AgentRun, ToolCall, Eval, and ModelInvocation. This enables fleet-scale dashboards and semantic tracing for GenAI workloads, as sketched below.
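To show what such semantic tracing could look like, here is a hedged sketch that emits nested agent-style spans with the standard OpenTelemetry Python API. The span and attribute names mirror the entities named above but are illustrative only; they are not an official Azure schema.

```python
# pip install opentelemetry-api opentelemetry-sdk
# Illustrative sketch: nesting AgentRun -> ToolCall / ModelInvocation spans.
# Span and attribute names are examples, not an official Azure convention.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for demonstration; in production this would
# point at an OTLP endpoint feeding Azure Monitor.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("support-chatbot")

with tracer.start_as_current_span("AgentRun") as run:
    run.set_attribute("agent.name", "billing-assistant")
    with tracer.start_as_current_span("ToolCall") as tool:
        tool.set_attribute("tool.name", "lookup_invoice")  # e.g., a billing API
    with tracer.start_as_current_span("ModelInvocation") as model:
        model.set_attribute("gen_ai.request.model", "gpt-4o")
```

Because each step is a span, a dashboard can reconstruct the full reasoning path of a run and attribute latency and cost to individual tool calls and model invocations.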
Observability as a First-Class Citizen in Azure AI Foundry

In Azure AI Foundry, observability isn't bolted on—it's built in. The platform treats observability as a first-class capability, essential for building trustworthy, scalable, and responsible AI systems.

Image source: Microsoft Learn

What Does This Mean in Practice?

Semantic Tracing for Agents: Azure AI Foundry enables intelligent agents to perform tasks using AgentRun, ToolCall, and ModelInvocation. AgentRun manages the entire lifecycle of an agent's execution, from input processing to output generation. ToolCall allows agents to invoke external tools or APIs for specific tasks, like fetching data or performing calculations. ModelInvocation lets agents directly use AI models for advanced tasks, such as sentiment analysis or image recognition. Together, these components create adaptable agents capable of handling complex workflows efficiently.

Integrated Evaluation Framework: Developers can continuously assess agent responses for quality, safety, and relevance using built-in evaluators. These can be run manually or automatically via CI/CD pipelines, enabling fast iteration and regression detection.

Governance and Risk Management: Observability data feeds directly into governance workflows. Azure AI Foundry supports policy mapping, risk scoring, and audit logging, helping teams meet compliance requirements while maintaining agility.

Feedback Loop for Continuous Improvement: Observability isn't just about watching—it's about learning. Azure AI Foundry enables teams to use telemetry and evaluation data to refine agents, improve performance, and reduce risk over time.

Now, Build AI You Can Trust

Observability isn't just a technical feature—it's the foundation of responsible AI. Whether you're building copilots, deploying GenAI agents, or modernizing enterprise workflows, Azure AI Foundry and Azure Monitor give you the tools to trace, evaluate, and improve every decision your AI makes. Now is the time to move beyond black-box models and embrace transparency, safety, and performance at scale. Start integrating observability into your AI workflows and unlock the full potential of your agents—with confidence.

Read more here:
Plans | Microsoft Learn
Observability and Continuous Improvement - Training | Microsoft Learn
Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn

About the Author

Priyanka is a Technical Trainer at Microsoft USA with over 15 years of experience as a Microsoft Certified Trainer. She has a profound passion for learning and sharing knowledge across various domains. Priyanka excels in delivering training sessions, proctoring exams, and upskilling Microsoft Partners and Customers. She has contributed significantly to AI and Data-related courseware, exams, and high-profile events such as Microsoft Ignite, Microsoft Learn Live Shows, MCT Community AI Readiness, and Women in Cloud Skills Ready. She also supports initiatives like "Code Without Barriers" and "Women in Azure AI," contributing to AI skills enhancement. Her primary areas of expertise include courses on Development, Data, and AI. In addition to maintaining and acquiring new certifications in Data and AI, she has guided learners and enthusiasts on their educational journeys. Priyanka is an active member of the Microsoft Tech Community, where she reviews and writes blogs focusing on Data and AI. #SkilledByMTT #MSLearn #MTTBloggingGroup

The Future of AI: Reduce AI Provisioning Effort - Jumpstart your solutions with AI App Templates
In the previous post, we introduced Contoso Chat – an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI App template (for builders) and the basis for a hands-on workshop (for learners). We also briefly covered the five stages of the developer workflow (provision, setup, ideate, evaluate, deploy) that take you from initial prompt to deployed product. But how can that sample help you build your app? The answer lies in developer tools and AI App templates that jumpstart productivity by giving you a fast start and a solid foundation to build on. In this post, we answer that question with a closer look at Azure AI App templates - what they are, and how we can jumpstart our productivity with a reuse-and-extend approach that builds on open-source samples for core application architectures.

The Future of AI: Harnessing AI agents for Customer Engagements
Discover how AI-powered agents are revolutionizing customer engagement—enhancing real-time support, automating workflows, and empowering human professionals with intelligent orchestration. Explore the future of AI-driven service, including Customer Assist created with Azure AI Foundry.

Model Mondays S2E12: Models & Observability
1. Weekly Highlights

This week's top news in the Azure AI ecosystem included:
- GPT Real Time (GA): Azure AI Foundry now offers GPT Real Time—lifelike voices, improved instruction following, audio fidelity, and function calling, with support for image context and lower pricing. Read the announcement and check out the model card for more details.
- Azure AI Translator API (Public Preview): Choose between fast Neural Machine Translation (NMT) or nuanced LLM-powered translations, with real-time flexibility for multilingual workflows. Read the announcement, then check out the Azure AI Translator documentation for more details.
- Azure AI Foundry Agents Learning Plan: Build agents with autonomous goal pursuit, memory, collaboration, and deep fine-tuning (SFT, RFT, DPO) on Azure AI Foundry. Read the announcement to see what agentic AI involves, then follow this comprehensive learning plan with step-by-step guidance.
- CalcLM Agent Grid (Azure AI Foundry Labs): Project CalcLM: Agent Grid is a prototype and open-source experiment that illustrates how agents might live in a grid-like surface (like Excel). It's formula-first and lightweight, defining agentic workflows like calculations. Try the prototype and visit Foundry Labs to learn more.
- Agent Factory Blog: Observability in Agentic AI: Agentic AI tools and workflows are gaining rapid adoption in the enterprise, but delivering safe, reliable, and performant agents requires foundational support for observability. Read the 6-part Agent Factory series and check out the Top 5 agent observability best practices for reliable AI blog post for more details.

2. Spotlight On: Observability in Azure AI Foundry

This week's spotlight featured a deep dive and demo by Han Che (Senior PM, Core AI, Microsoft), showing observability end-to-end for agent workflows.

Why Observability?
- Ensures AI quality, performance, and safety throughout the development lifecycle.
- Enables monitoring, root cause analysis, optimization, and governance for agents and models.

Key Features & Demos (across the development lifecycle):
- Leaderboard: Pick the best model for your agent with real-time evaluation.
- Playground: Chat with and prototype agents, and view instant quality and safety metrics.
- Evaluators: Assess quality, risk, safety, intent resolution, tool accuracy, code vulnerability, and custom metrics.
- Governance: Integrate with partners like Credo AI and Saidot for policy mapping and evidence archiving.
- Red Teaming Agent: Automatically test for vulnerabilities and unsafe behavior.
- CI/CD Integration: Automate evaluation in GitHub Actions and Azure DevOps pipelines.
- Monitoring Dashboard: Resource usage, application analytics, input/output tokens, request latency, cost breakdown (via Azure Cost Management), and evaluation scores.
- SDKs & Local Evaluation: Run evaluations locally or in the cloud with the Azure AI Evaluation SDK; see the sketch below.
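As a companion to the demo, here is a hypothetical sketch of a batch evaluation run with the Azure AI Evaluation SDK, the kind of step you might automate in a GitHub Actions or Azure DevOps pipeline. File names, config values, and the metric key naming are placeholders and assumptions; check the SDK docs for your version.

```python
# pip install azure-ai-evaluation
# Hypothetical CI step: evaluate a JSONL dataset of agent runs and fail the
# build if the average groundedness drops below a threshold.
import sys

from azure.ai.evaluation import GroundednessEvaluator, evaluate

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "gpt-4o",
}

results = evaluate(
    data="agent_runs.jsonl",  # rows with "query", "response", "context" fields
    evaluators={"groundedness": GroundednessEvaluator(model_config)},
    output_path="eval_results.json",
)

metrics = results["metrics"]
print(metrics)
# Metric key naming may vary by SDK version; commonly "<alias>.<metric>".
avg = metrics.get("groundedness.groundedness", 0.0)
print(f"Average groundedness: {avg:.2f}")
sys.exit(0 if avg >= 4.0 else 1)  # gate the pipeline on the score
```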
Demo Highlights:
- Chat with a travel planning agent; view run metrics and tool usage.
- Drill into run details, debugging, and real-time safety/quality scores.
- Configure and run large-scale agent evaluations in CI/CD pipelines.
- Compare agents, review statistical analysis, and monitor production dashboards.

3. Customer Story: Saifr

Saifr is a RegTech company that uses artificial intelligence to streamline compliance for marketing, communications, and creative teams in regulated industries. Incubated at Fidelity Labs (Fidelity Investments' innovation arm), Saifr helps enterprises create, review, and approve content that meets regulatory standards—faster and with less manual effort.

What Saifr Offers
- AI-Powered Compliance: Saifr's platform leverages proprietary AI models trained on decades of regulatory expertise to automatically detect potential compliance risks in text, images, audio, and video.
- Automated Guardrails: The solution flags risky or non-compliant language, suggests compliant alternatives, and provides explanations—all in real time.
- Workflow Integration: Saifr integrates seamlessly with enterprise content creation and approval workflows, including cloud platforms and agentic AI systems like Azure AI Foundry.
- Multimodal Support: Goes beyond text to check images, videos, and audio for compliance risks, supporting modern marketing and communications teams.

4. Key Takeaways
- Observability is Essential: Azure AI Foundry offers complete monitoring, evaluation, tracing, and governance for agentic AI—making production safe, reliable, and compliant.
- Built-In Evaluation and Red Teaming: Use leaderboards, evaluators, and red teaming agents to assess and continuously improve model safety and quality.
- CI/CD and Dashboard Integration: Automate evaluations in GitHub Actions or Azure DevOps, then monitor and optimize agents in production with detailed dashboards.
- Compliance Made Easy: Saifr's agents and models help financial services and regulated industries proactively meet compliance standards for content and communications.

Sharda's Tips: How I Wrote This Blog

I focus on organizing highlights, summarizing customer stories, and linking to official Microsoft docs and real working resources. For this recap, I explored the Azure AI Foundry observability docs, tested CI/CD pipeline integration, and watched the customer demo to share best practices for regulated industries. Here's my Copilot prompt for this episode: "Generate a technical blog post for Model Mondays S2E12 based on the transcript and episode details. Focus on observability, agent dashboards, CI/CD, compliance, and customer stories. Add correct, working Microsoft links!"

Coming Up Next Week

Next week: Open Source Models! Join us for the final episode with Hugging Face's VP of Product, live demos, and open model workflows. Register For The Livestream – Sep 15, 2025

About Model Mondays

Model Mondays is your weekly Azure AI learning series:
- 5-Minute Highlights: Latest AI news and product updates
- 15-Minute Spotlight: Demos and deep dives with product teams
- 30-Minute AMA Fridays: Ask anything in Discord or the forum

Start building: Watch Past Replays | Register For AMA | Recap Past AMAs

Join The Community

Don't build alone! The Azure AI Developer Community is here for real-time chats, events, and support: Join the Discord | Explore the Forum

About Me

I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week's Model Mondays livestream.

eBPF-Powered Observability Beyond Azure: A Multi-Cloud Perspective with Retina
Kubernetes simplifies container orchestration but introduces observability challenges due to dynamic pod lifecycles and complex inter-service communication. eBPF technology addresses these issues by providing deep system insights and efficient monitoring. The open-source Retina project leverages eBPF for comprehensive, cloud-agnostic network observability across AKS, GKE, and EKS, enhancing troubleshooting and optimization through real-world demo scenarios.

Automating the Linux Quality Assurance with LISA on Azure
Introduction

Building on the insights from our previous blog about how Microsoft ensures the quality of Linux images, this article elaborates on the open-source tools that are instrumental in securing exceptional performance, reliability, and overall excellence of virtual machines on Azure. While numerous testing tools are available for validating Linux kernels, guest OS images, and user-space packages across various cloud platforms, finding a comprehensive testing framework that addresses the entire platform stack remains a significant challenge. A robust framework is essential: one that seamlessly integrates with Azure's environment while providing coverage for major testing tools such as LTP and kselftest, and covering critical areas like networking, storage, and specialized workloads, including Confidential VMs, HPC, and GPU scenarios. Such a unified testing framework is invaluable for developers, Linux distribution providers, and customers who build custom kernels and images. This is where LISA (Linux Integration Services Automation) comes into play. LISA is an open-source tool specifically designed to automate and enhance the testing and validation processes for Linux kernels and guest OS images on Azure. In this blog, we cover the history of LISA, its key advantages, the wide range of test cases it supports, and why it is an indispensable resource for the open-source community. Moreover, LISA is available under the MIT License, making it free to use, modify, and contribute to.

History of LISA

LISA was initially developed as an internal tool by Microsoft to streamline the testing of Linux images and kernel validation on Azure. Recognizing the value it could bring to the broader community, Microsoft open-sourced LISA, inviting developers and organizations worldwide to leverage and enhance its capabilities. This move aligned with Microsoft's growing commitment to open-source collaboration, fostering innovation and shared growth within the industry. LISA serves as a robust solution to validate and certify that Linux images meet the stringent requirements of modern cloud environments. By integrating LISA into the development and deployment pipeline, teams can:
- Enhance Quality Assurance: Catch and resolve issues early in the development cycle.
- Reduce Time to Market: Accelerate deployment by automating repetitive testing tasks.
- Build Trust with Users: Deliver stable and secure applications, bolstering user confidence.
- Collaborate and Innovate: Leverage community-driven improvements and share insights.

Benefits of Using LISA
- Scalability: Designed to run test passes at any scale, from one test case to 10,000 in a single command.
- Multi-platform orchestration: LISA has a modular design that supports running the same test cases on various platforms, including Microsoft Azure, Windows Hyper-V, bare metal, and other cloud-based platforms.
- Customization: Users can customize test cases, workflows, and other components to fit specific needs, allowing for targeted testing strategies—for example, building kernels on the fly or sending results to a custom database (see the sketch after this list).
- Community Collaboration: Being open source under the MIT License, LISA encourages community contributions, fostering continuous improvement and shared expertise.
- Extensive Test Coverage: It offers a rich suite of test cases covering various aspects of Azure and Linux VM compatibility, from kernel, storage, and networking to middleware.
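To give a feel for what a custom LISA test case looks like, here is a minimal sketch modeled on the project's hello-world pattern. The metadata values and test logic are illustrative, and the exact decorator signatures may vary across LISA versions; see aka.ms/lisa for the current API.

```python
# A minimal LISA test suite sketch, adapted from the project's hello-world
# pattern. Field values are illustrative; check aka.ms/lisa for the current API.
from assertpy import assert_that

from lisa import Logger, Node, TestCaseMetadata, TestSuite, TestSuiteMetadata


@TestSuiteMetadata(
    area="demo",
    category="functional",
    description="Smoke checks that run on any SSH-reachable test node.",
)
class KernelSmoke(TestSuite):
    @TestCaseMetadata(
        description="Verify the node boots a kernel and reports its release.",
        priority=1,
    )
    def verify_kernel_release(self, node: Node, log: Logger) -> None:
        # LISA runs commands over SSH; no agent is required on the node.
        result = node.execute("uname -r")
        log.info(f"kernel release: {result.stdout}")
        assert_that(result.exit_code).is_equal_to(0)
        assert_that(result.stdout).is_not_empty()
```

Because the test only declares what it needs and talks to an abstract Node, the same case can run against an Azure VM, a Hyper-V guest, or a bare-metal machine.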
How It Works

Infrastructure: LISA is designed to be componentized and to maximize compatibility with different distros. Test cases can focus solely on test logic: once test requirements (machines, CPU, memory, etc.) are defined, you just write the test logic without worrying about environment setup or stopping services on different distributions.

Orchestration: LISA uses platform APIs to create, modify, and delete VMs. For example, LISA uses the Azure API to create VMs, run test cases, and delete VMs. While a test case runs, LISA uses the Azure API to collect serial logs and can hot add/remove data disks. If other platforms implement the same serial log and data disk APIs, the test cases run on those platforms seamlessly. LISA ensures distro compatibility by abstracting over 100 commands used in test cases, allowing focus on validation logic rather than distro differences. A pre-processing workflow assists in building the kernel on the fly, installing the kernel from package repositories, or modifying all test environments. The test matrix lets one run test everything: a single run can test different VM sizes on Azure, different images, or both together. Anything parameterizable can be tested in a matrix. Customizable notifiers enable saving test results and files to any type of storage and database.

Agentless and Low Dependency: LISA operates test systems via SSH without requiring additional dependencies, ensuring compatibility with any system that supports SSH. Although some test cases install extra dependencies, LISA itself does not. This allows LISA to test systems with limited resources or even different operating systems. For instance, LISA can run on Linux, FreeBSD, Windows, and ESXi.

Getting Started with LISA

Ready to dive in? Visit the LISA project at aka.ms/lisa to access the documentation.
- Install: Follow the installation guide provided in the repository to set up LISA in your testing environment.
- Run: Follow the instructions to run LISA on a local machine, on Azure, or against existing systems.
- Extend: Follow the documents to extend LISA with test cases, data sources, tools, platforms, workflows, and more.
- Join the Community: Engage with other users and contributors through forums and discussions to share experiences and best practices.
- Contribute: Modify existing test cases or create new ones to suit your needs, and share your contributions with the community to enhance LISA's capabilities.

Conclusion

LISA offers an open-source, collaborative testing solution designed to operate across diverse environments and scenarios, effectively narrowing the gap between enterprise demands and community-led innovation. By leveraging LISA, customers can ensure their Linux deployments are reliable and optimized for performance. Its comprehensive testing capabilities, combined with the flexibility and support of an active community, make LISA an indispensable tool for anyone involved in Linux quality assurance and testing. Your feedback is invaluable, and we would greatly appreciate your insights.

Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI
We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution, and depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times. One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log. This log contains a wealth of information ranging from noise to interesting to actionable data. One could set up alerts for delete and update signals; however, that can result in a flood of notifications. To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log and allows you to group and summarize data by various dimensions. You still need someone to review the report regularly; however, consuming the data this way makes it a whole lot easier. This by no means replaces the need for setting up alerts for key signals, but it does give you a great view of what's happened in your environment. If you're interested, this is the KQL query I'm using in Power BI:

```kusto
let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project TimeGenerated, Properties_d.resource, ResourceGroup, OperationNameValue,
    Authorization_d.scope, Authorization_d.action, Caller, CallerIpAddress,
    ActivityStatusValue
| order by TimeGenerated asc
```

How Microsoft Ensures the Quality of Linux VM Images and Platform Experiences on Azure?
In the continuously evolving landscape of cloud computing and AI, the quality and reliability of virtual machines (VMs) play a vital role for businesses running mission-critical workloads. With over 65% of Azure workloads running Linux, our commitment to delivering high-quality Linux VM images and platforms remains unwavering. This involves overcoming unique challenges and implementing rigorous validation processes to ensure that every Linux VM image offered on Azure meets high standards of quality and reliability. Ensuring the quality of Linux images and the overall platform experience on Azure means addressing the challenges posed by a unique platform stack and the complexity of managing and validating multiple independent release cycles. High-quality Linux VMs are essential for ensuring consistent performance, minimizing downtime and regressions, and enhancing security by addressing vulnerabilities with timely updates.

Figure 1: Complexity of Linux VMs in Azure

- VM Image Updates: Azure's Marketplace offers a diverse array of Linux distributions, each maintained by its respective publisher. These distributions release updates on their own schedules, independent of Azure's infrastructure updates.
- Package Updates: Within each Linux distribution, numerous packages are maintained and updated separately, adding another layer of complexity to the update and validation process.
- Extension and Agent Updates: Azure provides over 75 guest VM extensions to enhance operating system capabilities, security, recovery, and more. These extensions are updated independently, requiring careful validation to ensure compatibility and stability.
- Azure Infrastructure Updates: Azure regularly updates its underlying infrastructure, including components like Azure Boost, to improve reliability, performance, and security.
- VM SKUs and Sizes: Azure provides thousands of VM sizes with various combinations of CPU, memory, disk, and network configurations to meet diverse customer needs.

Managing concurrent updates across all VMs poses significant QA challenges. To address this, Azure uses rigorous testing, gating, and validation processes to ensure all components function reliably and meet customer expectations.

Azure's Approach to Overcoming Challenges

To address these challenges, we have implemented a comprehensive validation strategy that involves testing at every stage of the image and kernel lifecycle. By adopting a shift-left approach, we execute Linux VM-specific test cases as early as possible. This strategy helps us catch failures close to the source of changes, before they are deployed to the Azure fleet. Our validation gates integrate with various entry points and provide coverage for a wide variety of scenarios on Azure.
- Upstream Kernel Validation: As a founding member of KernelCI, Microsoft validates commits from the Linux next and stable trees using Linux VMs in Azure and shares results with the community via KernelCI's common results database. This enables us to detect regressions at early stages.
- Azure-Tuned Kernel Validation: Azure-tuned kernels provided by our endorsed distribution partners are thoroughly validated and signed off by Microsoft before they are released to the Azure fleet.
- Linux Guest Image Validation: The quality team works with endorsed distribution partners on major releases to conduct thorough validation. Each refreshed image, including those from third-party publishers, is validated and certified before being added to the marketplace.
Automated pipelines are in place to validate images once they are available in the Marketplace.
- Package Validation (Unattended Update): We validate package updates against the target distro to prevent regressions and ensure that only tested snapshots are used for updating Linux VMs in Azure.
- Guest Extension Validation: Every Azure-provided extension undergoes Basic Validation Testing (BVT) across all images and kernel versions to ensure compatibility and functionality amidst any changes. Additionally, comprehensive release testing is conducted for major releases to maintain reliability and compatibility.
- New VM SKU Validation: Any new VM SKU undergoes validation to confirm it supports Linux before its release to the Azure fleet. This process includes functionality, performance, and stress testing across various Linux distributions, plus compatibility tests with existing Linux images in the fleet.
- Azure Host OS & Host Agent Validation: Updates to the Azure Host OS and agents are thoroughly tested from the Linux guest OS perspective to confirm that changes in the Azure host environment do not cause regressions in compatibility, performance, or stability for Linux VMs.

At any stage where regressions or bugs are identified, we block those releases so they never reach customers. All issues are resolved and rigorously retested before images, kernels, or extension updates are made available. Through these robust validation processes, Azure ensures that Linux VMs consistently meet customer expectations, delivering a reliable, secure, and high-performance environment for mission-critical workloads.

Validation Tools for VM Guest Images and Kernels

To ensure the quality and reliability of Linux VM images and kernels on Azure, we leverage open-source kernel testing frameworks like LTP, kselftest, and fstests, along with extensive Azure-specific test cases available in LISA, to comprehensively validate all aspects of the platform.

LISA (Linux Integration Services Automation): Microsoft is committed to open source, and that is no different with our testing framework, LISA. LISA is an open-source core testing framework designed to meet all Linux validation needs. It includes over 400 tests covering performance, features, and security, ensuring comprehensive validation of Linux images on Azure. By automating diverse test scenarios, LISA enables early detection and resolution of issues, enhancing the stability and performance of Linux VMs.

Conclusion

At Azure, Linux quality is a fundamental aspect of our commitment to delivering reliable VM images and platforms. Through comprehensive testing and strong collaboration with Linux distribution partners, we ensure the quality and reliability of VMs while proactively identifying and resolving potential issues. This approach allows us to continually refine our processes and maintain the quality that customers expect from Azure. Quality is a core focus, and we remain dedicated to continuous improvement, delivering world-class Linux environments to businesses and customers. For us, quality is not just a priority—it's our standard. Your feedback is invaluable, and we would greatly appreciate your insights.