Project Pavilion Presence at KubeCon NA 2025
KubeCon + CloudNativeCon NA took place in Atlanta, Georgia, from 10-13 November, and continued to highlight the ongoing growth of the open source, cloud-native community. Microsoft participated throughout the event and supported several open source projects in the Project Pavilion. Microsoft’s involvement reflected our commitment to upstream collaboration, open governance, and enabling developers to build secure, scalable, and portable applications across the ecosystem.

The Project Pavilion serves as a dedicated, vendor-neutral space on the KubeCon show floor reserved for CNCF projects. Unlike the corporate booths, it focuses entirely on open source collaboration, bringing maintainers and contributors together with end users for hands-on demos, technical discussions, and roadmap insights. This space helps attendees discover emerging technologies and understand how different projects fit into the cloud-native ecosystem. It plays a critical role in exchanging ideas, resolving challenges, and strengthening collaboration across CNCF-approved technologies.

Why Our Presence Matters

KubeCon NA remains one of the most influential gatherings for developers and organizations shaping the future of cloud-native computing. For Microsoft, participating in the Project Pavilion helps advance our goals of:

- Open governance and community-driven innovation
- Scaling vital cloud-native technologies
- Secure and sustainable operations
- Learning from practitioners and adopters
- Enabling developers across clouds and platforms

Many of Microsoft’s products and cloud services are built on or aligned with CNCF and open-source technologies. Being active within these communities ensures that we are contributing back to the ecosystem we depend on and designing by collaborating with the community, not just for it.
Microsoft-Supported Pavilion Projects

containerd
Representative: Wei Fu

The containerd team engaged with project maintainers and ecosystem partners to explore solutions for improving AI model workflows. A key focus was the challenge of handling large OCI artifacts (often 500+ GiB) used in AI training workloads. Current image-pulling flows require containerd to fetch and fully unpack blobs, which significantly delays pod startup for large models. Collaborators from Docker, NTT, and ModelPack discussed a non-unpacking workflow that would allow training workloads to consume model data directly. The team plans to prototype this behavior as an experimental feature in containerd. Additional discussions included updates related to nerdbox and next steps for the erofs snapshotter.

Copacetic
Representative: Joshua Duffney

The Copa booth attracted roughly 75 attendees, with strong representation from federal agencies and financial institutions, a sign of growing adoption in regulated industries. A lightning talk delivered at the conference significantly boosted traffic and engagement. Key feedback and insights included:

- High interest in customizable package update sources
- Demand for application-level patching beyond OS-level updates
- Need for clearer CI/CD integration patterns
- Expectations around in-cluster image patching
- Questions about runtime support, including Podman

The conversations revealed several documentation gaps and feature opportunities that will inform Copa’s roadmap and future enablement efforts.

Drasi
Representative: Nandita Valsan

KubeCon NA 2025 marked Drasi’s first in-person presence since its launch in October 2024 and its entry into the CNCF Sandbox in early 2025. With multiple kiosk slots, the team interacted with ~70 visitors across shifts.
Engagement highlights included:

- New community members joining the Drasi Discord and starring GitHub repositories
- Meaningful discussions with observability and incident management vendors interested in change-driven architectures
- Positive reception to Aman Singh’s conference talk, which led attendees back to the booth for deeper technical conversations

Post-event follow-ups are underway with several sponsors and partners to explore collaboration opportunities.

Flatcar Container Linux
Representatives: Sudhanva Huruli and Vamsi Kavuru

The Flatcar project had some fantastic conversations at the pavilion. Attendees were eager to learn about bare metal provisioning, GPU support for AI workloads, and how Flatcar’s fully automated build and test process keeps things simple and developer friendly. Questions around Talos vs. Flatcar and CoreOS sparked lively discussions, with the team emphasizing Flatcar’s usability and independence from an OS-level API. Interest came from government agencies and financial institutions, and the preview of Flatcar on AKS opened the door to deeper conversations about real-world adoption. The Project Pavilion proved to be the perfect venue for authentic, technical exchanges.

Headlamp
Representatives: Joaquim Rocha, Will Case, and Oleksandr Dubenko

Headlamp had a booth for all three days of the conference, engaging with both longstanding users and first-time attendees. The increased visibility from becoming a Kubernetes sub-project was evident, with many attendees sharing their usage patterns across large tech organizations and smaller industrial teams.
The booth enabled maintainers to:

- Gather insights into how teams use Headlamp in different environments
- Introduce Headlamp to new users discovering it via talks or hallway conversations
- Build stronger connections with the community and understand evolving needs

Inspektor Gadget
Representatives: Jose Blanquicet and Mauricio Vásquez Bernal

Hosting a half-day kiosk session, Inspektor Gadget welcomed approximately 25 visitors. Attendees included newcomers interested in learning the basics and existing users looking for updates. The team showcased new capabilities, including the tcpdump gadget and Prometheus metrics export, and invited visitors to the upcoming contribfest to encourage participation.

Istio
Representatives: Keith Mattix, Jackie Maertens, Steven Jin Xuan, Niranjan Shankar, and Mike Morris

The Istio booth continued to attract a mix of experienced adopters and newcomers seeking guidance. Technical discussions focused on:

- Enhancements to multicluster support in ambient mode
- Migration paths from sidecars to ambient
- Improvements in Gateway API availability and usage
- Performance and operational benefits for large-scale deployments

Users, including several Azure customers, expressed appreciation for Microsoft’s sustained investment in Istio as part of their service mesh infrastructure.

Notary Project
Representatives: Feynman Zhou, Yi Zha, and Toddy Mladenov

The Notary Project booth saw significant interest from practitioners concerned with software supply chain security. Attendees discussed signing, verification workflows, and integrations with Azure services and Kubernetes clusters. The conversations will influence upcoming improvements across Notary Project and Ratify, reinforcing Microsoft’s commitment to secure artifacts and verifiable software distribution.
Open Policy Agent (OPA) - Gatekeeper
Representative: Jaydip Gabani

The OPA/Gatekeeper booth enabled maintainers to connect with both new and existing users to explore use cases around policy enforcement, Rego/CEL authoring, and managing large policy sets. Many conversations surfaced opportunities around simplifying best practices and reducing management complexity. The team also promoted participation in an ongoing Gatekeeper/OPA survey to guide future improvements.

ORAS
Representatives: Feynman Zhou, Yi Zha, and Toddy Mladenov

ORAS engaged developers interested in OCI artifacts beyond container images, including AI/ML models, metadata, backups, and multi-cloud artifact workflows. Attendees appreciated ORAS’s ecosystem integrations and found the booth examples useful for understanding how artifacts are tagged, packaged, and distributed. Many users shared how they leverage ORAS with Azure Container Registry and other OCI-compatible registries.

Conclusion

KubeCon + CloudNativeCon North America 2025 reinforced the essential role of open source communities in driving innovation across cloud-native technologies. Through the Project Pavilion, Microsoft teams were able to exchange knowledge with other maintainers, gather user feedback, and support projects that form foundational components of modern cloud infrastructure. Microsoft remains committed to building alongside the community and strengthening the ecosystem that powers so much of today’s cloud-native development.

For anyone interested in exploring or contributing to these open source efforts, please reach out directly to each project’s community to get involved, or contact Lexi Nadolski at lexinadolski@microsoft.com for more information.

Foundry Agent Service at Ignite 2025: Simple to Build. Powerful to Deploy. Trusted to Operate.
The upgraded Foundry Agent Service delivers a unified, simplified platform with managed hosting, built-in memory, tool catalogs, and seamless integration with Microsoft Agent Framework. Developers can now deploy agents faster and more securely, leveraging one-click publishing to Microsoft 365 and advanced governance features for streamlined enterprise AI operations.

From Policy to Practice: Built-In CIS Benchmarks on Azure - Flexible, Hybrid-Ready
Security is more important than ever. The industry standard for secure machine configuration is the Center for Internet Security (CIS) Benchmarks. These benchmarks provide consensus-based, prescriptive guidance to help organizations harden diverse systems, reduce risk, and streamline compliance with major regulatory frameworks and industry standards like NIST, HIPAA, and PCI DSS. In our previous post, we outlined our plans to improve the Linux server compliance and hardening experience on Azure and shared a vision for integrating CIS Benchmarks. Today, that vision has turned into reality. We're now announcing the next phase of this work: Center for Internet Security (CIS) Benchmarks are now available on Azure for all Azure endorsed distros, at no additional cost to Azure and Azure Arc customers.

With today's announcement, you get access to the CIS Benchmarks on Azure with full parity to what’s published by the Center for Internet Security (CIS). You can adjust parameters or define exceptions, tailoring security to your needs and applying consistent controls across cloud, hybrid, and on-premises environments - without having to implement every control manually. Thanks to this flexible architecture, you can truly manage compliance as code.

How we achieve parity

To ensure accuracy and trust, we rely on and ingest CIS machine-readable Benchmark content (OVAL/XCCDF files) as the source of truth. This guarantees that the controls and rules you apply in Azure match the official CIS specifications, reducing drift and ensuring compliance confidence.

What’s new under the hood

At the core of this update is azure-osconfig’s new compliance engine - a lightweight, open-source module developed by the Azure Core Linux team. It evaluates Linux systems directly against industry-standard benchmarks like CIS, supporting both audit and, in the future, auto-remediation. This enables accurate, scalable compliance checks across large Linux fleets.
Here you can read more about azure-osconfig.

Dynamic rule evaluation

The new compliance engine supports simple fact-checking operations, logical combinations of them (e.g., anyOf, allOf), and Lua-based scripting, which makes it possible to express the complex checks required by the CIS Critical Security Controls - all evaluated natively, without external scripts.

Scalable architecture for large fleets

When an assignment is created, the Azure control plane instructs the machine to pull the latest policy package via the Machine Configuration agent. Azure-osconfig’s compliance engine is integrated as a lightweight library in the package and is called by the Machine Configuration agent for evaluation, which happens every 15-30 minutes. This ensures near real-time compliance state without overwhelming resources and enables consistent evaluation across thousands of VMs and Azure Arc-enabled servers.

Future-ready for remediation and enforcement

While the Public Preview starts with audit-only mode, the roadmap includes per-rule remediation and enforcement using technologies like eBPF for kernel-level controls. This will allow proactive prevention of configuration drift and runtime hardening at scale. Please reach out if you are interested in auto-remediation or enforcement.

Extensibility beyond CIS Benchmarks

The architecture was designed to support other security and compliance standards as well and isn’t limited to CIS Benchmarks. The compliance engine is modular, and we plan to extend the platform with STIG and other relevant industry benchmarks. This positions Azure as a single control plane where you can manage your compliance without duplicating efforts elsewhere.

Collaboration with the CIS

This milestone reflects a close collaboration between Microsoft and the CIS to bring industry-standard security guidance into Azure as a built-in capability.
Our shared goal is to make cloud-native compliance practical and consistent, while giving customers the flexibility to meet their unique requirements. We are committed to continuously supporting new Benchmark releases, expanding coverage with new distributions, and easing adoption through built-in workflows, such as moving from your current Benchmark version to a new one while preserving your custom configurations.

Certification and trust

We can proudly announce that azure-osconfig has met all the requirements and is officially certified by the CIS for Benchmark assessment, so you can trust compliance results as authoritative. Minor benchmark updates will be applied automatically, while major versions will be released separately. We will include workflows to help migrate customizations seamlessly across versions.

Key Highlights

- Built-in CIS Benchmarks for Azure Endorsed Linux distributions
- Full parity with official CIS Benchmarks content, certified by the CIS for Benchmark Assessment
- Flexible configuration: adjust parameters, define exceptions, tune severity
- Hybrid support: enforce the same baseline across Azure, on-prem, and multi-cloud with Azure Arc
- Reporting format in CIS tooling style

Supported use cases

- Certified CIS Benchmarks for all Azure Endorsed Distros - audit only (L1/L2 server profiles)
- Hybrid/on-premises and other cloud machines with Azure Arc for the supported distros
- Compliance as Code (example via GitHub -> Azure OIDC auth and API integration)
- Compatible with the GuestConfig workbook

What’s next?

Our next mission is to bring the previously announced auto-remediation capability into this experience, expand the distribution coverage, and elevate our workflows even further. We’re focused on empowering you to resolve issues while honoring the unique operational complexity of your environments. Stay tuned!
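For the compliance-as-code use case, assignment status can also be read programmatically from the Azure management API. The C# sketch below is a hypothetical illustration: the resource group and VM names are placeholders, and the api-version shown is an assumption to verify against the current Microsoft.GuestConfiguration REST reference. It only builds the request URL that a pipeline would call with a bearer token obtained via OIDC:

```csharp
using System;

// Builds the ARM URL that lists guest configuration assignments
// (including a CIS Benchmark assignment) for a single Azure VM.
static string BuildAssignmentsUri(string subscriptionId, string resourceGroup, string vmName) =>
    $"https://management.azure.com/subscriptions/{subscriptionId}" +
    $"/resourceGroups/{resourceGroup}" +
    $"/providers/Microsoft.Compute/virtualMachines/{vmName}" +
    "/providers/Microsoft.GuestConfiguration/guestConfigurationAssignments" +
    "?api-version=2022-01-25"; // assumed api-version; check the ARM reference

var uri = BuildAssignmentsUri("00000000-0000-0000-0000-000000000000", "demo-rg", "demo-vm");
Console.WriteLine(uri);

// A CI job would then send GET {uri} with an "Authorization: Bearer <token>"
// header and inspect each assignment's compliance status in the JSON response.
```

Because the URL construction is plain string code, the same helper can be unit tested in the pipeline before any call is made against a live subscription.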
Get Started

Documentation link for this capability. Enable CIS Benchmarks in Machine Configuration, select the “Official Center for Internet Security (CIS) Benchmarks for Linux Workloads”, then select the distributions for your assignment and customize as needed. If you want any additional distribution supported or have any feedback for azure-osconfig, please open an Azure support case or a GitHub issue here.

Relevant Ignite 2025 session: Hybrid workload compliance from policy to practice on Azure

Connect with us at Ignite

Meet the Linux team and stop by the Linux on Azure booth to see these innovations in action:

Session Type | Session Code | Session Name | Date/Time (PST)
Theatre | THR 712 | Hybrid workload compliance from policy to practice on Azure | Tue, Nov 18, 3:15 PM – 3:45 PM
Breakout | BRK 143 | Optimizing performance, deployments, and security for Linux on Azure | Thu, Nov 20, 1:00 PM – 1:45 PM
Breakout | BRK 144 | Build, modernize, and secure AKS workloads with Azure Linux | Wed, Nov 19, 1:30 PM – 2:15 PM
Breakout | BRK 104 | From VMs and containers to AI apps with Azure Red Hat OpenShift | Thu, Nov 20, 8:30 AM – 9:15 AM
Theatre | THR 701 | From Container to Node: Building Minimal-CVE Solutions with Azure Linux | Wed, Nov 19, 3:30 PM – 4:00 PM
Lab | Lab 505 | Fast track your Linux and PostgreSQL migration with Azure Migrate | Tue, Nov 18, 4:30 PM – 5:45 PM; Wed, Nov 19, 3:45 PM – 5:00 PM; Thu, Nov 20, 9:00 AM – 10:15 AM

Observability for Multi-Agent Systems with Microsoft Agent Framework and Azure AI Foundry
Agentic applications are revolutionizing enterprise automation, but their dynamic toolchains and latent reasoning make them notoriously hard to operate. In this post, you'll learn how to instrument a Microsoft Agent Framework–based service with OpenTelemetry, ship traces to Azure AI Foundry observability, and adopt a practical workflow to debug, evaluate, and improve multi-agent behavior in production. We'll show how to wire spans around reasoning steps and tool calls (OpenAPI / MCP), enabling deep visibility into your agentic workflows.

Who Should Read This?

- Developers building agents with Microsoft Agent Framework (MAF) in .NET or Python
- Architects/SREs seeking enterprise-grade visibility, governance, and reliability for deployments on Azure AI Foundry

Why Observability Is Non-Negotiable for Agents

Traditional logs fall short for agentic systems:

- Reasoning and routing (which tool? which doc?) are opaque without explicit spans/events
- Failures often occur between components (e.g., retrieval mismatch, tool schema drift)
- Without traces across agents ⇄ tools ⇄ data stores, you can't reproduce or evaluate behavior

Microsoft has introduced multi-agent observability patterns and OpenTelemetry (OTel) conventions that unify traces across Agent Framework, Foundry, and popular stacks, so you can see one coherent timeline for each task.

Reference Architecture

Key capabilities:

- Agent orchestration & deployment via Microsoft Agent Framework
- Model access using Foundry’s OpenAI-compatible endpoint
- OpenTelemetry for traces/spans + attributes (agent, tool, retrieval, latency, tokens)

Step-by-Step Implementation

Assumption: This article uses Azure Monitor (via Application Insights) as the OpenTelemetry exporter, but you can configure other supported exporters in the same way.
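On the exporter assumption: with OpenTelemetry's builder pattern, swapping Azure Monitor for another exporter is a one-line change. A minimal sketch (assuming the OpenTelemetry.Exporter.Console NuGet package) that writes spans to stdout for local debugging, with the rest of the pipeline unchanged:

```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Same tracer pipeline shape as the walkthrough below, but spans are printed
// to the console instead of being shipped to Application Insights.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("AgentObservabilityDemo"))
    .AddSource("Microsoft.Agents.AI")
    .AddConsoleExporter() // replaces AddAzureMonitorTraceExporter(...)
    .Build();
```

This is handy while iterating locally, since you can verify span names and tags without an Application Insights resource.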
Prerequisites

- .NET 8 SDK or later
- Azure OpenAI service (endpoint, API key, deployed model)
- Application Insights and Grafana

1. Create an Agent with OpenTelemetry (ASP.NET Core or Console App)

Install required packages:

dotnet add package Azure.AI.OpenAI
dotnet add package Azure.Monitor.OpenTelemetry.Exporter
dotnet add package Microsoft.Agents.AI.OpenAI
dotnet add package Microsoft.Extensions.Logging
dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Trace
dotnet add package OpenTelemetry.Metrics
dotnet add package OpenTelemetry.Extensions.Hosting
dotnet add package OpenTelemetry.Instrumentation.Http

Set up environment variables:

AZURE_OPENAI_ENDPOINT: https://<your_service_name>.openai.azure.com/
AZURE_OPENAI_API_KEY: <your_azure_openai_apikey>
APPLICATIONINSIGHTS_CONNECTION_STRING: <your_application_insights_connectionstring_for_azuremonitor_exporter>

Configure tracing once at startup:

var applicationInsightsConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");

// Create a resource describing the service
var resource = ResourceBuilder.CreateDefault()
    .AddService(serviceName: ServiceName)
    .AddAttributes(new Dictionary<string, object>
    {
        ["deployment.environment"] = "development",
        ["service.instance.id"] = Environment.MachineName
    })
    .Build();

// Setup OpenTelemetry TracerProvider
var traceProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(ServiceName))
    .AddSource(SourceName)
    .AddSource("Microsoft.Agents.AI")
    .AddHttpClientInstrumentation()
    .AddAzureMonitorTraceExporter(options =>
    {
        options.ConnectionString = applicationInsightsConnectionString;
    })
    .Build();

// Setup OpenTelemetry MeterProvider
var meterProvider = Sdk.CreateMeterProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(ServiceName))
    .AddMeter(SourceName)
    .AddAzureMonitorMetricExporter(options =>
    {
        options.ConnectionString = applicationInsightsConnectionString;
    })
    .Build();

// Configure DI and OpenTelemetry
var serviceCollection = new ServiceCollection();

// Setup Logging with OpenTelemetry and Application Insights
serviceCollection.AddLogging(loggingBuilder =>
{
    loggingBuilder.SetMinimumLevel(LogLevel.Debug);
    loggingBuilder.AddOpenTelemetry(options =>
    {
        options.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(ServiceName));
        options.IncludeScopes = true;
        options.IncludeFormattedMessage = true;
        options.AddAzureMonitorLogExporter(exporterOptions =>
        {
            exporterOptions.ConnectionString = applicationInsightsConnectionString;
        });
    });
    loggingBuilder.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
        {
            config.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
        },
        configureApplicationInsightsLoggerOptions: options =>
        {
            options.TrackExceptionsAsExceptionTelemetry = true;
            options.IncludeScopes = true;
        });
});

Configure custom metrics and an activity source for tracing:

using var activitySource = new ActivitySource(SourceName);
using var meter = new Meter(SourceName);

// Create custom metrics
var interactionCounter = meter.CreateCounter<long>("chat_interactions_total",
    description: "Total number of chat interactions");
var responseTimeHistogram = meter.CreateHistogram<double>("chat_response_time_ms",
    description: "Chat response time in milliseconds");

2.
Wire up the AI Agent:

// Create OpenAI client
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
var deploymentName = "gpt-4o-mini";

using var client = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey))
    .GetChatClient(deploymentName)
    .AsIChatClient()
    .AsBuilder()
    .UseOpenTelemetry(sourceName: SourceName, configure: (cfg) => cfg.EnableSensitiveData = true)
    .Build();

logger.LogInformation("Creating Agent with OpenTelemetry instrumentation");

// Create AI Agent
var agent = new ChatClientAgent(
        client,
        name: "AgentObservabilityDemo",
        instructions: "You are a helpful assistant that provides concise and informative responses.")
    .AsBuilder()
    .UseOpenTelemetry(SourceName, configure: (cfg) => cfg.EnableSensitiveData = true)
    .Build();

var thread = agent.GetNewThread();
logger.LogInformation("Agent created successfully with ID: {AgentId}", agent.Id);

3. Instrument agent logic with semantic attributes and call the OpenAI-compatible API:

// Create a parent span for the entire agent session
using var sessionActivity = activitySource.StartActivity("Agent Session");
Console.WriteLine($"Trace ID: {sessionActivity?.TraceId}");

var sessionId = Guid.NewGuid().ToString("N");
sessionActivity?
    .SetTag("agent.name", "AgentObservabilityDemo")
    .SetTag("session.id", sessionId)
    .SetTag("session.start_time", DateTimeOffset.UtcNow.ToString("O"));

logger.LogInformation("Starting agent session with ID: {SessionId}", sessionId);

using (logger.BeginScope(new Dictionary<string, object>
{
    ["SessionId"] = sessionId,
    ["AgentName"] = "AgentObservabilityDemo"
}))
{
    var interactionCount = 0;
    while (true)
    {
        Console.Write("You (or 'exit' to quit): ");
        var input = Console.ReadLine();
        if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
        {
            logger.LogInformation("User requested to exit the session");
            break;
        }

        interactionCount++;
        logger.LogInformation("Processing interaction #{InteractionCount}", interactionCount);

        // Create a child span for each individual interaction
        using var activity = activitySource.StartActivity("Agent Interaction");
        activity?
            .SetTag("user.input", input)
            .SetTag("agent.name", "AgentObservabilityDemo")
            .SetTag("interaction.number", interactionCount);

        var stopwatch = Stopwatch.StartNew();
        try
        {
            logger.LogInformation("Starting agent execution for interaction #{InteractionCount}", interactionCount);
            var response = await agent.RunAsync(input);
            Console.WriteLine($"Agent: {response}");
            Console.WriteLine();

            stopwatch.Stop();
            var responseTimeMs = stopwatch.Elapsed.TotalMilliseconds;

            // Record metrics
            interactionCounter.Add(1, new KeyValuePair<string, object?>("status", "success"));
            responseTimeHistogram.Record(responseTimeMs, new KeyValuePair<string, object?>("status", "success"));
            activity?.SetTag("interaction.status", "success");

            logger.LogInformation("Agent interaction #{InteractionNumber} completed successfully in {ResponseTime:F2} ms",
                interactionCount, responseTimeMs);
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
            Console.WriteLine();

            stopwatch.Stop();
            var responseTimeMs = stopwatch.Elapsed.TotalMilliseconds;

            // Record error metrics
            interactionCounter.Add(1, new KeyValuePair<string,
                object?>("status", "error"));
            responseTimeHistogram.Record(responseTimeMs, new KeyValuePair<string, object?>("status", "error"));
            activity?
                .SetTag("response.success", false)
                .SetTag("error.message", ex.Message)
                .SetStatus(ActivityStatusCode.Error, ex.Message);

            logger.LogError(ex, "Agent interaction #{InteractionNumber} failed after {ResponseTime:F2} ms: {ErrorMessage}",
                interactionCount, responseTimeMs, ex.Message);
        }
    }

    // Add session summary to the parent span
    sessionActivity?
        .SetTag("session.total_interactions", interactionCount)
        .SetTag("session.end_time", DateTimeOffset.UtcNow.ToString("O"));

    logger.LogInformation("Agent session completed. Total interactions: {TotalInteractions}", interactionCount);
}

Azure Monitor dashboard

Once you run the agent and generate some traffic, your dashboard in Azure Monitor will be populated as shown below. You can drill down to a specific service, activity source, or span by applying the relevant filters.

Key Features Demonstrated

- OpenTelemetry instrumentation with Microsoft Agent Framework
- Custom metrics for user interactions
- End-to-end telemetry correlation
- Real-time telemetry visualization along with metrics and logging interactions

Further reading

- Introducing Microsoft Agent Framework
- Azure AI Foundry docs
- OpenTelemetry
- Aspire Demo with Azure OpenAI

Want Safer, Smarter AI? Start with Observability in Azure AI Foundry
Observability in Azure AI: From Black Box to Transparent Intelligence

If you are an AI developer or engineer, you can benefit from Azure AI observability by gaining deep visibility into agent behavior, enabling you to trace decisions, evaluate response quality, and integrate automated testing into your workflows. This empowers you to build safer, more reliable GenAI applications. Responsible AI and compliance teams use observability tools to ensure transparency and accountability, leveraging audit logs, policy mapping, and risk scoring. These capabilities help organizations align AI development with ethical standards and regulatory requirements.

Understanding Observability

Imagine you're building a customer support chatbot using Azure AI. It’s designed to answer billing questions, troubleshoot issues, and escalate complex cases to human agents. Everything works well in testing, but once deployed, users start reporting confusing answers and slow response times.

Without observability, you’re flying blind. You don’t know:

- Which queries are failing
- Why the chatbot is choosing certain responses
- Whether it's escalating too often or not enough
- How latency and cost are trending over time

Enter observability. With Azure AI Foundry and Azure Monitor, you can:

- Trace every interaction: see the full reasoning path the chatbot takes, from user input to model invocation to tool calls
- Evaluate response quality: automatically assess whether answers are grounded, fluent, and relevant
- Monitor performance: track latency, throughput, and cost per interaction
- Detect anomalies: use Azure Monitor’s ML-powered diagnostics to spot unusual patterns
- Improve continuously: feed evaluation results back into your CI/CD pipeline to refine the chatbot with every release

This is observability in action: turning opaque AI behavior into transparent, actionable insights. It’s not just about fixing bugs, it’s about building AI you can trust.
Next, let’s understand more about observability.

What Is Observability in Azure AI?

Observability in Azure AI refers to the ability to monitor, evaluate, and govern AI agents and applications across their lifecycle, from model selection to production deployment. It’s not just about uptime or logs anymore. It’s about trust, safety, performance, cost, and compliance.

Observability aligned with the end-to-end AI application development workflow. Image source: Microsoft Learn

Key Components and Capabilities

Azure AI Foundry Observability

- Built-in observability for agentic workflows
- Tracks metrics like performance, quality, cost, safety, relevance, and “groundedness” in real time
- Enables tracing of agent interactions and data lineage
- Supports alerts for risky or off-policy responses and integrates with partner governance platforms

Find details on observability here: Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn

AI Red Teaming (PyRIT Integration)

- Scans agents for safety vulnerabilities
- Evaluates attack success rates across categories like hate, violence, sexual content, and more
- Generates scorecards and logs results in the Foundry portal

Find details here: AI Red Teaming Agent - Azure AI Foundry | Microsoft Learn. Image source: Microsoft Learn

CI/CD Integration

- GitHub Actions and Azure DevOps workflows automate evaluations
- Continuous monitoring and regression detection during development

Azure Monitor + Azure BRAIN

- Uses ML and LLMs for anomaly detection, forecasting, and root cause analysis
- Offers multi-tier log storage (Gold, Silver, Bronze) with a unified KQL query experience
- Integrates with Azure Copilot for diagnostics and optimization

OpenTelemetry Extensions

- Azure is extending OTel with agent-specific entities like AgentRun, ToolCall, Eval, and ModelInvocation
- Enables fleet-scale dashboards and semantic tracing for GenAI workloads
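At the wire level, these agent-specific entities are ordinary OpenTelemetry spans. The C# sketch below, using only System.Diagnostics from the BCL, shows the general shape of emitting a nested AgentRun/ToolCall pair; the span and tag names are illustrative, not the finalized semantic conventions:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var seen = new List<string>();

// Without a registered listener, StartActivity() returns null and no spans flow.
using var listener = new ActivityListener
{
    ShouldListenTo = src => src.Name == "Demo.Agent",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData,
    ActivityStarted = a => seen.Add(a.OperationName)
};
ActivitySource.AddActivityListener(listener);

using var source = new ActivitySource("Demo.Agent");

// One AgentRun span wrapping a nested ToolCall span, mirroring an agent
// that invokes a single tool during its run.
using (var run = source.StartActivity("AgentRun"))
{
    run?.SetTag("agent.name", "demo-agent");
    using (var tool = source.StartActivity("ToolCall"))
    {
        tool?.SetTag("tool.name", "lookup"); // hypothetical tool name
    }
}

Console.WriteLine(string.Join(",", seen)); // AgentRun,ToolCall
```

In a real exporter pipeline these spans appear as a parent/child pair on a single trace timeline, which is what fleet-scale dashboards render for each agent run.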
Observability as a First-Class Citizen in Azure AI Foundry

In Azure AI Foundry, observability isn’t bolted on; it’s built in. The platform treats observability as a first-class capability, essential for building trustworthy, scalable, and responsible AI systems. Image source: Microsoft Learn

What Does This Mean in Practice?

Semantic Tracing for Agents

Azure AI Foundry enables intelligent agents to perform tasks using AgentRun, ToolCall, and ModelInvocation. AgentRun manages the entire lifecycle of an agent's execution, from input processing to output generation. ToolCall allows agents to invoke external tools or APIs for specific tasks, like fetching data or performing calculations. ModelInvocation lets agents directly use AI models for advanced tasks, such as sentiment analysis or image recognition. Together, these components create adaptable agents capable of handling complex workflows efficiently.

Integrated Evaluation Framework

Developers can continuously assess agent responses for quality, safety, and relevance using built-in evaluators. These can be run manually or automatically via CI/CD pipelines, enabling fast iteration and regression detection.

Governance and Risk Management

Observability data feeds directly into governance workflows. Azure AI Foundry supports policy mapping, risk scoring, and audit logging, helping teams meet compliance requirements while maintaining agility.

Feedback Loop for Continuous Improvement

Observability isn’t just about watching; it’s about learning. Azure AI Foundry enables teams to use telemetry and evaluation data to refine agents, improve performance, and reduce risk over time.

Now, Build AI You Can Trust

Observability isn’t just a technical feature; it’s the foundation of responsible AI. Whether you're building copilots, deploying GenAI agents, or modernizing enterprise workflows, Azure AI Foundry and Azure Monitor give you the tools to trace, evaluate, and improve every decision your AI makes.
Now is the time to move beyond black-box models and embrace transparency, safety, and performance at scale. Start integrating observability into your AI workflows and unlock the full potential of your agents, with confidence.

Read more here:
- Plans | Microsoft Learn
- Observability and Continuous Improvement - Training | Microsoft Learn
- Observability in Generative AI with Azure AI Foundry - Azure AI Foundry | Microsoft Learn

About the Author

Priyanka is a Technical Trainer at Microsoft USA with over 15 years of experience as a Microsoft Certified Trainer. She has a profound passion for learning and sharing knowledge across various domains. Priyanka excels in delivering training sessions, proctoring exams, and upskilling Microsoft Partners and Customers. She has significantly contributed to AI and Data-related courseware, exams, and high-profile events such as Microsoft Ignite, Microsoft Learn Live Shows, MCT Community AI Readiness, and Women in Cloud Skills Ready. Furthermore, she supports initiatives like “Code Without Barrier” and “Women in Azure AI,” contributing to AI skills enhancement. Her primary areas of expertise include courses on Development, Data, and AI. In addition to maintaining and acquiring new certifications in Data and AI, she has also guided learners and enthusiasts on their educational journeys. Priyanka is an active member of the Microsoft Tech Community, where she reviews and writes blogs focusing on Data and AI.

#SkilledByMTT #MSLearn #MTTBloggingGroup

The Future of AI: Reduce AI Provisioning Effort - Jumpstart your solutions with AI App Templates
In the previous post, we introduced Contoso Chat, an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI App template (for builders) and the basis for a hands-on workshop (for learners). We also briefly covered the five stages of the developer workflow (provision, setup, ideate, evaluate, deploy) that take developers from initial prompt to deployed product. But how can that sample help you build your app? The answer lies in developer tools and AI App templates that jumpstart productivity by giving you a fast start and a solid foundation to build on. In this post, we answer that question with a closer look at Azure AI App templates: what they are, and how we can jumpstart our productivity with a reuse-and-extend approach that builds on open-source samples for core application architectures.

The Future of AI: Harnessing AI agents for Customer Engagements
Discover how AI-powered agents are revolutionizing customer engagement: enhancing real-time support, automating workflows, and empowering human professionals with intelligent orchestration. Explore the future of AI-driven service, including Customer Assist created with Azure AI Foundry.

Model Mondays S2E12: Models & Observability
1. Weekly Highlights

This week’s top news in the Azure AI ecosystem included:

- GPT Real Time (GA): Azure AI Foundry now offers GPT Real Time, generally available, with lifelike voices, improved instruction following, better audio fidelity, and function calling, plus support for image context and lower pricing. Read the announcement and check out the model card for more details.
- Azure AI Translator API (Public Preview): Choose between fast Neural Machine Translation (NMT) or nuanced LLM-powered translations, with real-time flexibility for multilingual workflows. Read the announcement, then check out the Azure AI Translator documentation for more details.
- Azure AI Foundry Agents Learning Plan: Build agents with autonomous goal pursuit, memory, collaboration, and deep fine-tuning (SFT, RFT, DPO) on Azure AI Foundry. Read the announcement to learn what Agentic AI involves, then follow this comprehensive learning plan with step-by-step guidance.
- CalcLM Agent Grid (Azure AI Foundry Labs): Project CalcLM: Agent Grid is a prototype and open-source experiment that illustrates how agents might live in a grid-like surface (like Excel). It's formula-first and lightweight, defining agentic workflows like calculations. Try the prototype and visit Foundry Labs to learn more.
- Agent Factory Blog: Observability in Agentic AI: Agentic AI tools and workflows are gaining rapid adoption in the enterprise, but delivering safe, reliable, and performant agents requires foundational support for observability. Read the 6-part Agent Factory series and check out the Top 5 agent observability best practices for reliable AI blog post for more details.

2. Spotlight On: Observability in Azure AI Foundry

This week’s spotlight featured a deep dive and demo by Han Che (Senior PM, Core AI, Microsoft), showing observability end-to-end for agent workflows.

Why Observability?
- Ensures AI quality, performance, and safety throughout the development lifecycle.
- Enables monitoring, root cause analysis, optimization, and governance for agents and models.

Key Features & Demos:

Development Lifecycle:
- Leaderboard: Pick the best model for your agent with real-time evaluation.
- Playground: Chat with and prototype agents; view instant quality and safety metrics.
- Evaluators: Assess quality, risk, safety, intent resolution, tool accuracy, code vulnerability, and custom metrics.
- Governance: Integrate with partners like Credo AI and Saidot for policy mapping and evidence archiving.
- Red Teaming Agent: Automatically test for vulnerabilities and unsafe behavior.
- CI/CD Integration: Automate evaluation in GitHub Actions and Azure DevOps pipelines.
- Monitoring Dashboard: Resource usage, application analytics, input/output tokens, request latency, cost breakdown (via Azure Cost Management), and evaluation scores.
- SDKs & Local Evaluation: Run evaluations locally or in the cloud with the Azure AI Evaluation SDK.

Demo Highlights:
- Chat with a travel planning agent; view run metrics and tool usage.
- Drill into run details, debugging, and real-time safety/quality scores.
- Configure and run large-scale agent evaluations in CI/CD pipelines.
- Compare agents, review statistical analysis, and monitor in production dashboards.

3. Customer Story: Saifr

Saifr is a RegTech company that uses artificial intelligence to streamline compliance for marketing, communications, and creative teams in regulated industries. Incubated at Fidelity Labs (Fidelity Investments’ innovation arm), Saifr helps enterprises create, review, and approve content that meets regulatory standards, faster and with less manual effort.

What Saifr Offers
- AI-Powered Compliance: Saifr’s platform leverages proprietary AI models trained on decades of regulatory expertise to automatically detect potential compliance risks in text, images, audio, and video.
- Automated Guardrails: The solution flags risky or non-compliant language, suggests compliant alternatives, and provides explanations, all in real time.
- Workflow Integration: Saifr seamlessly integrates with enterprise content creation and approval workflows, including cloud platforms and agentic AI systems like Azure AI Foundry.
- Multimodal Support: Goes beyond text to check images, videos, and audio for compliance risks, supporting modern marketing and communications teams.

4. Key Takeaways

- Observability is Essential: Azure AI Foundry offers complete monitoring, evaluation, tracing, and governance for agentic AI, making production safe, reliable, and compliant.
- Built-In Evaluation and Red Teaming: Use leaderboards, evaluators, and red teaming agents to assess and continuously improve model safety and quality.
- CI/CD and Dashboard Integration: Automate evaluations in GitHub Actions or Azure DevOps, then monitor and optimize agents in production with detailed dashboards.
- Compliance Made Easy: Saifr’s agents and models help financial services and other regulated industries proactively meet compliance standards for content and communications.

Sharda's Tips: How I Wrote This Blog

I focus on organizing highlights, summarizing customer stories, and linking to official Microsoft docs and real working resources. For this recap, I explored the Azure AI Foundry Observability docs, tested CI/CD pipeline integration, and watched the customer demo to share best practices for regulated industries. Here’s my Copilot prompt for this episode:

"Generate a technical blog post for Model Mondays S2E12 based on the transcript and episode details. Focus on observability, agent dashboards, CI/CD, compliance, and customer stories. Add correct, working Microsoft links!"

Coming Up Next Week

Next week: Open Source Models! Join us for the final episode with Hugging Face's VP of Product, live demos, and open model workflows.
Register For The Livestream – Sep 15, 2025

About Model Mondays

Model Mondays is your weekly Azure AI learning series:
- 5-Minute Highlights: Latest AI news and product updates
- 15-Minute Spotlight: Demos and deep dives with product teams
- 30-Minute AMA Fridays: Ask anything in Discord or the forum

Start building:
- Watch Past Replays
- Register For AMA
- Recap Past AMAs

Join The Community

Don’t build alone! The Azure AI Developer Community is here for real-time chats, events, and support:
- Join the Discord
- Explore the Forum

About Me

I'm Sharda, a Gold Microsoft Learn Student Ambassador focused on cloud and AI. Find me on GitHub, Dev.to, Tech Community, and LinkedIn. In this blog series, I share takeaways from each week’s Model Mondays livestream.