sdk
58 Topics

The Future of AI: Computer Use Agents Have Arrived
Discover the groundbreaking advancements in AI with Computer Use Agents (CUAs). In this blog, Marco Casalaina shares how to use the Responses API from Azure OpenAI Service, showcasing how CUAs can launch apps, navigate websites, and reason through tasks. Learn how CUAs use multimodal models for computer vision and AI frameworks to enhance automation. Explore the differences between CUAs and traditional Robotic Process Automation (RPA), and understand how CUAs can complement RPA systems. Dive into the future of automation and see how CUAs are set to revolutionize the way we interact with technology.

Azure AI Foundry: Advancing OpenTelemetry and delivering unified multi-agent observability
Microsoft is enhancing multi-agent observability by introducing new semantic conventions to OpenTelemetry, developed collaboratively with Outshift, Cisco's incubation engine. These additions, built upon OpenTelemetry and W3C Trace Context, establish standardized practices for tracing and telemetry within multi-agent systems, facilitating consistent logging of key metrics for quality, performance, safety, and cost. This systematic approach enables more comprehensive visibility into multi-agent workflows, including tool invocations and collaboration.

These advancements have been integrated into Azure AI Foundry, Microsoft Agent Framework, Semantic Kernel, and the Azure AI packages for LangChain, LangGraph, and the OpenAI Agents SDK, enabling customers to get unified observability in Azure AI Foundry for agentic systems built with any of these frameworks. The additional semantic conventions and cross-framework integration equip developers to monitor, troubleshoot, and optimize their AI agents in a unified solution with increased efficiency and valuable insights.

"Outshift, Cisco's Incubation Engine, worked with Microsoft to add new semantic conventions in OpenTelemetry. These conventions standardize multi-agent observability and evaluation, giving teams comprehensive insights into their AI systems."
Giovanna Carofiglio, Distinguished Engineer, Cisco

Multi-agent observability challenges

Multi-agent systems involve multiple interacting agents with diverse roles and architectures. Such systems are inherently dynamic: they adapt in real time by decomposing complex tasks into smaller, manageable subtasks and distributing them across specialized agents. Each agent may invoke multiple tools, often in parallel or in sequence, to fulfill its part of the task, resulting in emergent workflows that are non-linear and highly context dependent. Given this dynamic behavior and the coordination required across agents, observability becomes critical for debugging, performance tuning, security, and compliance.

Multi-agent observability presents unique challenges that traditional GenAI telemetry standards fail to address. Traditional observability conventions are optimized for visibility into a single agent's reasoning path and lack the semantic depth to capture collaborative workflows across multiple agents. In multi-agent systems, tasks are often decomposed and distributed dynamically, requiring visibility into agent roles, task hierarchies, tool usage, and decision-making processes. Without standardized task spans and a unified namespace, it is difficult to trace cross-agent coordination, evaluate task outcomes, or analyze retry patterns. These gaps hinder white-box observability, making it hard to assess performance, safety, and quality across complex agentic workflows.

Extending OpenTelemetry with multi-agent observability

Microsoft has proposed new spans and attributes for the OpenTelemetry semantic conventions covering GenAI agent and framework spans. They enrich insights and capture the complexity of agent, tool, task, and plan interactions in multi-agent systems. Below is a list of all the new additions proposed to OpenTelemetry.

New span
- execute_task: Captures task planning and event propagation, providing insights into how tasks are decomposed and distributed

New child spans under invoke_agent
- agent_to_agent_interaction: Traces communication between agents
- agent.state.management: Effective context and short- or long-term memory management
- agent_planning: Logs the agent's internal planning steps
- agent orchestration: Captures agent-to-agent orchestration

New attributes in the invoke_agent span
- tool_definitions: Describes the tool's purpose or configuration
- llm_spans: Records model call spans

New attributes in the execute_tool span
- tool.call.arguments: Logs the arguments passed during tool invocation
- tool.call.results: Records the results returned by the tool

New event
- Evaluation, with attributes (name, error.type, label): Enables structured evaluation of agent performance and decision-making
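To make the proposed conventions more concrete, here is a minimal sketch of how an application might emit some of these spans, attributes, and events with the OpenTelemetry Python SDK. The span, attribute, and event names follow the list above; the agent names, payloads, evaluation event name, and the console exporter are illustrative placeholders rather than part of the convention, and the exact keys in the merged semantic conventions may differ.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Illustrative setup: print spans to the console so the structure is visible.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("multi_agent_demo")

# An invoke_agent span carrying the proposed tool_definitions attribute,
# with execute_tool and agent_to_agent_interaction child spans nested inside.
with tracer.start_as_current_span(
    "invoke_agent",
    attributes={
        "gen_ai.agent.name": "planner",  # hypothetical agent name
        "tool_definitions": '[{"name": "search", "description": "web search"}]',
    },
) as agent_span:
    with tracer.start_as_current_span("execute_tool") as tool_span:
        tool_span.set_attribute("tool.call.arguments", '{"query": "order status"}')
        tool_span.set_attribute("tool.call.results", '{"status": "shipped"}')

    with tracer.start_as_current_span("agent_to_agent_interaction") as handoff_span:
        handoff_span.set_attribute("gen_ai.agent.name", "researcher")  # peer agent

    # Structured evaluation event with the attributes named above
    # (name, error.type, label); the event name here is an assumption.
    agent_span.add_event(
        "gen_ai.evaluation.result",
        attributes={"name": "task_adherence", "label": "pass"},
    )
```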
More details can be found in the following pull requests merged into OpenTelemetry:

- Add tool definition plus tool-related attributes in invoke-agent, inference, and execute-tool spans
- Capture evaluation results for GenAI applications

Azure AI Foundry delivers unified observability for Microsoft Agent Framework, LangChain, LangGraph, and the OpenAI Agents SDK

Agents built with Azure AI Foundry (SDK or portal) get out-of-the-box observability in Foundry. With this addition, agents built on other frameworks, including Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph, and the OpenAI Agents SDK, can use Foundry for monitoring, analysis, debugging, and evaluation with full observability. Agents built on Microsoft Agent Framework and Semantic Kernel get out-of-the-box tracing and evaluation support in Foundry Observability. Agents built with LangChain, LangGraph, and the OpenAI Agents SDK can use the corresponding packages and the detailed instructions in the documentation to get tracing and evaluation support in Foundry Observability.

Customer benefits

With standardized multi-agent observability and support across multiple agent frameworks, customers get the following benefits:

- Unified observability platform for multi-agent systems: Foundry Observability is the unified multi-agent observability platform for agents built with the Foundry SDK or other agent frameworks such as Microsoft Agent Framework, LangGraph, LangChain, and the OpenAI Agents SDK.
- End-to-end visibility into multi-agent systems: Customers can see not just what the system did, but how and why, from user request through agent collaboration and tool usage to final output.
- Faster debugging and root cause analysis: When something goes wrong (e.g., a wrong answer, a safety violation, a performance bottleneck), customers can trace the exact path and see which agent, tool, or task failed, and why.
- Quality and safety assurance: Built-in metrics and evaluation events (e.g., task success and validation scores) help customers monitor and improve the quality and safety of their AI workflows.
- Cost and performance optimization: Detailed metrics on token usage, API calls, resource consumption, and latency help customers optimize efficiency and cost.

Get started today with end-to-end agent observability with Foundry

Azure AI Foundry Observability is a unified solution for evaluating, monitoring, tracing, and governing the quality, performance, and safety of your AI systems end to end, all built into your AI development loop and backed by the power of Azure Monitor for full-stack observability. From model selection to real-time debugging, Foundry Observability capabilities empower teams to ship production-grade AI with confidence and speed. It's observability, reimagined for the enterprise AI era.
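As a starting point, the sketch below shows one way an agent application could route OpenTelemetry traces into the Application Insights resource connected to a Foundry project, which is what Foundry Observability reads from. It assumes the azure-monitor-opentelemetry package and a connection string supplied via an environment variable (the variable name and span contents are placeholders); framework-specific instrumentation for LangChain, LangGraph, or the OpenAI Agents SDK would be layered on top as described in the documentation referenced above.

```python
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Wire the OpenTelemetry SDK to Azure Monitor / Application Insights.
# The connection string would typically come from the Application Insights
# resource attached to your Foundry project.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)

tracer = trace.get_tracer("my_agent_app")

# Spans emitted by the application, or by framework instrumentation configured
# separately, now flow to the same resource that Foundry Observability reads.
with tracer.start_as_current_span("invoke_agent") as span:
    span.set_attribute("gen_ai.agent.name", "support-agent")  # illustrative
```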
With the above OpenTelemetry enhancements, Azure AI Foundry now provides detailed multi-agent observability for agents built with different frameworks, including Azure AI Foundry, Microsoft Agent Framework, LangChain, LangGraph, and the OpenAI Agents SDK. Learn more about Azure AI Foundry Observability and get end-to-end agent observability today!

The Future of AI: Autonomous Agents for Identifying the Root Cause of Cloud Service Incidents
Discover how Microsoft is transforming cloud service incident management with autonomous AI agents. Learn how AI-enhanced troubleshooting guides and agentic workflows are reducing downtime and empowering on-call engineers.

The Future of AI: Customizing AI agents with the Semantic Kernel agent framework
The blog post Customizing AI agents with the Semantic Kernel agent framework discusses the capabilities of the Semantic Kernel SDK, an open-source tool developed by Microsoft for creating AI agents and multi-agent systems. It highlights the benefits of using single-purpose agents within a multi-agent system to achieve more complex workflows with improved efficiency. The Semantic Kernel SDK offers features like telemetry, hooks, and filters to ensure secure and responsible AI solutions, making it a versatile tool for both simple and complex AI projects.

The Future of AI: Developing Code Assist – a Multi-Agent Tool
Discover how Code Assist, created with Azure AI Foundry Agent Service, uses AI agents to automate code documentation, generate business-ready slides, and detect security risks in large codebases, boosting developer productivity and project clarity.

The Future of AI: Power Your Agents with Azure Logic Apps
Building intelligent applications no longer requires complex coding. With advancements in technology, you can now create agents using cloud-based tools to automate workflows, connect to various services, and integrate business processes across hybrid environments without writing any code.

Ignite 2024: Streamlining AI Development with an Enhanced User Interface, Accessibility, and Learning Experiences in Azure AI Foundry portal
Announcing Azure AI Foundry, a unified platform that simplifies AI development and management. The platform portal (formerly Azure AI Studio) features a revamped user interface, an enhanced model catalog, a new management center, and improved accessibility and learning experiences, making it easier than ever for developers and IT admins to design, customize, and manage AI apps and agents efficiently.

Unlock the Potential of AI in Your Apps with Semantic Kernel: A Lightweight SDK for LLMs
Semantic Kernel (SK) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. The SK extensible programming model combines natural-language semantic functions, traditional code native functions, and embeddings-based memory, unlocking new potential and adding value to applications with AI. SK supports prompt templating, function chaining, vectorized memory, and intelligent planning capabilities out of the box.
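For readers who want to see that programming model in practice, below is a minimal sketch that registers a native function and a prompt-based semantic function on a kernel using the Python semantic-kernel package (1.x-style API). The deployment name, endpoint, key, and plugin names are placeholders, and the exact API surface may vary between SDK versions.

```python
import asyncio
from datetime import date

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.functions import kernel_function


class TimePlugin:
    """A native (code) function that planners or function calling could invoke."""

    @kernel_function(name="today", description="Returns today's date.")
    def today(self) -> str:
        return date.today().isoformat()


async def main() -> None:
    kernel = Kernel()

    # Placeholder deployment, endpoint, and key values.
    kernel.add_service(
        AzureChatCompletion(
            deployment_name="gpt-4o",
            endpoint="https://<your-resource>.openai.azure.com",
            api_key="<your-key>",
        )
    )

    # Register the native plugin alongside a prompt-templated semantic function.
    kernel.add_plugin(TimePlugin(), plugin_name="time")
    summarize = kernel.add_function(
        plugin_name="writer",
        function_name="summarize",
        prompt="Summarize the following text in one sentence:\n{{$input}}",
    )

    # Invoke the semantic function; the {{$input}} template variable is bound
    # from the keyword argument below.
    result = await kernel.invoke(summarize, input="Semantic Kernel is a lightweight SDK ...")
    print(result)


asyncio.run(main())
```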