Semantic Kernel

The Future of AI: The paradigm shifts in Generative AI Operations
Dive into the transformative world of Generative AI Operations (GenAIOps) with Microsoft Azure. Discover how businesses are overcoming the challenges of deploying and scaling generative AI applications. Learn about the innovative tools and services Azure AI offers, and how they empower developers to create high-quality, scalable AI solutions. Explore the paradigm shift from MLOps to GenAIOps and see how continuous improvement practices ensure your AI applications remain cutting-edge. Join us on this journey to harness the full potential of generative AI and drive operational excellence.

The Future of AI: Customizing AI agents with the Semantic Kernel agent framework
The blog post Customizing AI agents with the Semantic Kernel agent framework discusses the capabilities of the Semantic Kernel SDK, an open-source tool developed by Microsoft for creating AI agents and multi-agent systems. It highlights the benefits of using single-purpose agents within a multi-agent system to achieve more complex workflows with improved efficiency. The Semantic Kernel SDK offers features like telemetry, hooks, and filters to ensure secure and responsible AI solutions, making it a versatile tool for both simple and complex AI projects.

The Future of AI: Computer Use Agents Have Arrived
Discover the groundbreaking advancements in AI with Computer Use Agents (CUAs). In this blog, Marco Casalaina shares how to use the Responses API from Azure OpenAI Service, showcasing how CUAs can launch apps, navigate websites, and reason through tasks. Learn how CUAs utilize multimodal models for computer vision and AI frameworks to enhance automation. Explore the differences between CUAs and traditional Robotic Process Automation (RPA), and understand how CUAs can complement RPA systems. Dive into the future of automation and see how CUAs are set to revolutionize the way we interact with technology.

Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
Unlock the power of Microsoft AI with redeliverable technical presentations, hands-on workshops, and open-source curriculum from the Microsoft AI Tour! Whether you’re a Microsoft MVP, Developer, or IT Professional, these expertly crafted resources empower you to teach, train, and lead AI adoption in your community. Explore top breakout sessions covering GitHub Copilot, Azure AI, Generative AI, and security best practices—designed to simplify AI integration and accelerate digital transformation. Dive into interactive workshops that provide real-world applications of AI technologies. Take it a step further with Microsoft’s Open-Source AI Curriculum, offering beginner-friendly courses on AI, Machine Learning, Data Science, Cybersecurity, and GitHub Copilot—perfect for upskilling teams and fostering innovation. Don’t just learn—lead. Access these resources, host impactful training sessions, and drive AI adoption in your organization. Start sharing today! Explore now: Microsoft AI Tour Resources.

Azure AI Foundry: Advancing OpenTelemetry and delivering unified multi-agent observability
Microsoft is enhancing multi-agent observability by introducing new semantic conventions to OpenTelemetry, developed collaboratively with Outshift, Cisco’s incubation engine. These additions—built upon OpenTelemetry and W3C Trace Context—establish standardized practices for tracing and telemetry within multi-agent systems, facilitating consistent logging of key metrics for quality, performance, safety, and cost. This systematic approach enables more comprehensive visibility into multi-agent workflows, including tool invocations and collaboration.

These advancements have been integrated into Azure AI Foundry, Microsoft Agent Framework, Semantic Kernel, and Azure AI packages for LangChain, LangGraph, and the OpenAI Agents SDK, enabling customers to get unified observability for agentic systems built using any of these frameworks with Azure AI Foundry. The additional semantic conventions and integration across different frameworks equip developers to monitor, troubleshoot, and optimize their AI agents in a unified solution with increased efficiency and valuable insights.

“Outshift, Cisco's Incubation Engine, worked with Microsoft to add new semantic conventions in OpenTelemetry. These conventions standardize multi-agent observability and evaluation, giving teams comprehensive insights into their AI systems.”
Giovanna Carofiglio, Distinguished Engineer, Cisco

Multi-agent observability challenges

Multi-agent systems involve multiple interacting agents with diverse roles and architectures. Such systems are inherently dynamic—they adapt in real time by decomposing complex tasks into smaller, manageable subtasks and distributing them across specialized agents. Each agent may invoke multiple tools, often in parallel or in sequence, to fulfill its part of the task, resulting in emergent workflows that are non-linear and highly context-dependent.
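The decomposition pattern described above can be sketched in a few lines of Python. Everything here is illustrative: the agent names, the routing table, and the static planner are stand-ins for what a real system would do with an LLM-driven planner, not part of any Azure SDK.

```python
# Illustrative sketch of dynamic task decomposition in a multi-agent system.
# Agent names and the routing table are hypothetical, not an Azure API.

def research_agent(subtask: str) -> str:
    return f"facts for: {subtask}"

def writer_agent(subtask: str) -> str:
    return f"draft for: {subtask}"

# Specialized agents are registered under the roles they handle.
ROUTES = {"research": research_agent, "write": writer_agent}

def decompose(task: str) -> list[tuple[str, str]]:
    # A real planner would decompose dynamically (often via an LLM);
    # here we split statically into two subtasks for illustration.
    return [("research", task), ("write", task)]

def run(task: str) -> list[str]:
    # Each subtask is routed to the specialized agent registered for its role.
    return [ROUTES[role](subtask) for role, subtask in decompose(task)]

print(run("blog post on GenAIOps"))
```

Each hop in this toy workflow (task, agent invocation, tool call) is exactly the kind of step the new telemetry conventions aim to make traceable.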
Given the dynamic nature of multi-agent systems and the coordination required across agents, observability becomes critical for debugging, performance tuning, security, and compliance in such systems.

Multi-agent observability presents unique challenges that traditional GenAI telemetry standards fail to address. Traditional observability conventions are optimized for single-agent reasoning-path visibility and lack the semantic depth to capture collaborative workflows across multiple agents. In multi-agent systems, tasks are often decomposed and distributed dynamically, requiring visibility into agent roles, task hierarchies, tool usage, and decision-making processes. Without standardized task spans and a unified namespace, it's difficult to trace cross-agent coordination, evaluate task outcomes, or analyze retry patterns. These gaps hinder white-box observability, making it hard to assess performance, safety, and quality across complex agentic workflows.

Extending OpenTelemetry with multi-agent observability

Microsoft has proposed new spans and attributes for the OpenTelemetry semantic conventions covering GenAI agent and framework spans. They enrich insights and capture the complexity of agent, tool, task, and plan interactions in multi-agent systems. Below is a list of all the new additions proposed to OpenTelemetry.

New span:
- execute_task: captures task planning and event propagation, providing insights into how tasks are decomposed and distributed.

New child spans under "invoke_agent":
- agent_to_agent_interaction: traces communication between agents.
- agent.state.management: effective context and short- or long-term memory management.
- agent_planning: logs the agent's internal planning steps.
- agent orchestration: captures agent-to-agent orchestration.

New attributes in the "invoke_agent" span:
- tool_definitions: describes the tool's purpose or configuration.
- llm_spans: records model call spans.

New attributes in the "execute_tool" span:
- tool.call.arguments: logs the arguments passed during tool invocation.
- tool.call.results: records the results returned by the tool.

New event:
- Evaluation, with attributes (name, error.type, label): enables structured evaluation of agent performance and decision-making.

More details can be found in the following pull requests merged into OpenTelemetry:
- Add tool definition plus tool-related attributes in invoke-agent, inference, and execute-tool spans
- Capture evaluation results for GenAI applications

Azure AI Foundry delivers unified observability for Microsoft Agent Framework, LangChain, LangGraph, OpenAI Agents SDK

Agents built with Azure AI Foundry (SDK or portal) get out-of-the-box observability in Foundry. With the new additions, agents built on different frameworks, including Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph, and the OpenAI Agents SDK, can use Foundry for monitoring, analyzing, debugging, and evaluation with full observability. Agents built on Microsoft Agent Framework and Semantic Kernel get out-of-box tracing and evaluation support in Foundry Observability. Agents built with LangChain, LangGraph, and the OpenAI Agents SDK can use the corresponding packages and the detailed instructions listed in the documentation to get tracing and evaluation support in Foundry Observability.
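To make the span hierarchy concrete, here is a toy recorder showing how the proposed names nest: an execute_task span containing an invoke_agent span, which in turn holds an execute_tool child. This is not the OpenTelemetry SDK; the attribute keys mirror the proposal's wording and may differ from the final specification.

```python
# Minimal sketch of the proposed multi-agent span hierarchy using a toy
# recorder. Span names and attribute keys follow the proposal's wording;
# this is NOT the OpenTelemetry SDK and the final schema may differ.
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# invoke_agent carries the new tool_definitions attribute...
invoke = Span("invoke_agent",
              {"tool_definitions": [{"name": "search", "purpose": "web lookup"}]})

# ...and an execute_tool child records call arguments and results.
tool = Span("execute_tool",
            {"tool.call.arguments": {"query": "GenAIOps"},
             "tool.call.results": ["doc1"]})
invoke.children.append(tool)

# execute_task sits at the top, capturing how the task was distributed.
task = Span("execute_task", {"task": "write blog post"})
task.children.append(invoke)

print(task.name, "->", [c.name for c in task.children])
```

Walking such a tree top-down is what lets a backend reconstruct which agent handled which subtask and with which tools.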
Customer benefits

With standardized multi-agent observability and support across multiple agent frameworks, customers get the following benefits:

- Unified observability platform for multi-agent systems: Foundry Observability is the unified multi-agent observability platform for agents built with the Foundry SDK or other agent frameworks such as Microsoft Agent Framework, LangGraph, LangChain, and the OpenAI Agents SDK.
- End-to-end visibility into multi-agent systems: customers can see not just what the system did, but how and why—from user request, through agent collaboration and tool usage, to final output.
- Faster debugging and root-cause analysis: when something goes wrong (e.g., a wrong answer, a safety violation, or a performance bottleneck), customers can trace the exact path, see which agent, tool, or task failed, and understand why.
- Quality and safety assurance: built-in metrics and evaluation events (e.g., task success and validation scores) help customers monitor and improve the quality and safety of their AI workflows.
- Cost and performance optimization: detailed metrics on token usage, API calls, resource consumption, and latency help customers optimize efficiency and cost.

Get started today with end-to-end agent observability with Foundry

Azure AI Foundry Observability is a unified solution for evaluating, monitoring, tracing, and governing the quality, performance, and safety of your AI systems end-to-end—all built into your AI development loop and backed by the power of Azure Monitor for full-stack observability. From model selection to real-time debugging, Foundry Observability capabilities empower teams to ship production-grade AI with confidence and speed. It's observability, reimagined for the enterprise AI era. With the above OpenTelemetry enhancements, Azure AI Foundry now provides detailed multi-agent observability for agents built with different frameworks, including Azure AI Foundry, Microsoft Agent Framework, LangChain, LangGraph, and the OpenAI Agents SDK.
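The cost and performance benefit above ultimately comes down to aggregating per-span metrics such as token counts and latency by agent. A minimal sketch of that aggregation (the span dictionaries and field names here are made up for illustration; real spans follow the GenAI semantic conventions, not this shape):

```python
# Illustrative aggregation of token usage and latency from span records.
# The span shape and field names are invented for this sketch; real
# telemetry follows the GenAI semantic conventions.
from collections import defaultdict

spans = [
    {"agent": "planner", "tokens": 320, "latency_ms": 410},
    {"agent": "writer",  "tokens": 950, "latency_ms": 1300},
    {"agent": "planner", "tokens": 180, "latency_ms": 220},
]

# Roll up totals per agent so hotspots (cost or latency) stand out.
totals = defaultdict(lambda: {"tokens": 0, "latency_ms": 0})
for s in spans:
    totals[s["agent"]]["tokens"] += s["tokens"]
    totals[s["agent"]]["latency_ms"] += s["latency_ms"]

for agent, t in sorted(totals.items()):
    print(agent, t["tokens"], "tokens,", t["latency_ms"], "ms")
```

An observability backend does this rollup continuously, which is what turns raw traces into the cost and latency dashboards described above.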
Learn more about Azure AI Foundry Observability and get end-to-end agent observability today!

Building AI Apps with the Foundry Local C# SDK
What Is Foundry Local?

Foundry Local is a lightweight runtime designed to run AI models directly on user devices. It supports a wide range of hardware (CPU, GPU, NPU) and provides a consistent developer experience across platforms. The SDKs are available in multiple languages, including Python, JavaScript, Rust, and now C#.

Why a C# SDK?

The C# SDK brings Foundry Local into the heart of the .NET ecosystem. It allows developers to:

- Download and manage models locally.
- Run inference using OpenAI-compatible APIs.
- Integrate seamlessly with existing .NET applications.

This means you can build intelligent apps that run offline, reduce latency, and maintain data privacy—all without sacrificing developer productivity.

Bootstrap Process: How the SDK Gets You Started

One of the most developer-friendly aspects of the C# SDK is its automatic bootstrap process. Here's what happens under the hood when you initialize the SDK:

1. Service discovery and startup: the SDK automatically locates the Foundry Local installation on the device and starts the inference service if it's not already running.
2. Model download and caching: if the specified model isn't already cached locally, the SDK downloads the most performant model variant (e.g., GPU, CPU, or NPU) for the end user's hardware from the Foundry model catalog. This ensures you're always working with the latest optimized version.
3. Model loading into the inference service: once downloaded (or retrieved from cache), the model is loaded into the Foundry Local inference engine, ready to serve requests.

This streamlined process means developers can go from zero to inference with just a few lines of code—no manual setup or configuration required.

Leverage Your Existing AI Stack

One of the most exciting aspects of the Foundry Local C# SDK is its compatibility with popular AI tools:

OpenAI SDK: Foundry Local provides an OpenAI-compliant chat completions (and embedding) API.
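To see why that compatibility matters, consider what an OpenAI-style chat request looks like on the wire: swapping a cloud endpoint for a local one changes only the destination and credential, not the payload. The sketch below builds the request shape in plain Python; the localhost port and keys are placeholders, not Foundry Local defaults.

```python
# Sketch of why OpenAI-client code ports to an OpenAI-compatible local
# endpoint: only the base URL and API key change, the wire format does not.
# The localhost port, keys, and model names are placeholders.

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    # Assemble the pieces an OpenAI-compatible client sends for a chat call.
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"model": model, "messages": [{"role": "user", "content": prompt}]},
    }

cloud = build_chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "hi")
local = build_chat_request("http://localhost:5273/v1", "local-key", "phi-3.5-mini", "hi")

# Same payload shape either way; only the destination and key differ.
assert cloud["json"].keys() == local["json"].keys()
print(local["url"])
```

This is the same idea the C# example later in this post relies on when it points the OpenAI client at `manager.Endpoint`.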
If you’re already using the `OpenAI` chat completions API, you can reuse your existing code with minimal changes.

Semantic Kernel: Foundry Local also integrates well with Semantic Kernel, Microsoft’s open-source framework for building AI agents. You can use Foundry Local models as plugins or endpoints within Semantic Kernel workflows—enabling advanced capabilities like memory, planning, and tool calling.

Quick Start Example

Follow these three steps:

1. Create a new project

Create a new C# project and navigate to it:

```shell
dotnet new console -n hello-foundry-local
cd hello-foundry-local
```

2. Install NuGet packages

Install the following NuGet packages into your project:

```shell
dotnet add package Microsoft.AI.Foundry.Local --version 0.1.0
dotnet add package OpenAI --version 2.2.0-beta.4
```

3. Use the OpenAI SDK with Foundry Local

The following example demonstrates how to use the OpenAI SDK with Foundry Local. The code initializes the Foundry Local service, loads a model, and generates a response using the OpenAI SDK.
Copy and paste the following code into a C# file named Program.cs:

```csharp
using Microsoft.AI.Foundry.Local;
using OpenAI;
using OpenAI.Chat;
using System.ClientModel;

var alias = "phi-3.5-mini";

// Bootstrap: start the Foundry Local service and load the model for this alias.
var manager = await FoundryLocalManager.StartModelAsync(aliasOrModelId: alias);
var model = await manager.GetModelInfoAsync(aliasOrModelId: alias);

// Point the OpenAI client at the local endpoint using the manager's key.
ApiKeyCredential key = new ApiKeyCredential(manager.ApiKey);
OpenAIClient client = new OpenAIClient(key, new OpenAIClientOptions { Endpoint = manager.Endpoint });
var chatClient = client.GetChatClient(model?.ModelId);

// Stream the completion to the console as tokens arrive.
var completionUpdates = chatClient.CompleteChatStreaming("Why is the sky blue?");
Console.Write("[ASSISTANT]: ");
foreach (var completionUpdate in completionUpdates)
{
    if (completionUpdate.ContentUpdate.Count > 0)
    {
        Console.Write(completionUpdate.ContentUpdate[0].Text);
    }
}
```

Run the code using the following command:

```shell
dotnet run
```

Final thoughts

The Foundry Local C# SDK empowers developers to build intelligent, privacy-preserving applications that run anywhere. Whether you're working on desktop, mobile, or embedded systems, this SDK offers a robust and flexible way to bring AI closer to your users. Ready to get started? Dive into the official documentation:

- Getting started guide
- C# Reference documentation

You can also make contributions to the C# SDK by creating a PR on GitHub: Foundry Local on GitHub

Build Custom Engine Agents in AI Foundry for Microsoft 365 Copilot
If you already have a multi‑agent AI application, you can surface it inside Microsoft 365 Copilot without adding another orchestration layer. Use a thin “proxy agent” built with the Microsoft 365 Agents SDK to handle Copilot activities and forward a simple request to your existing backend (in this example, we will use a simple Semantic Kernel multi‑agent workflow on top of Azure AI Foundry that writes and SEO‑optimizes blog posts). Develop fast and deploy to Azure with the Microsoft 365 Agents Toolkit for VS Code.

Transforming Enterprise AKS: Multi-Tenancy at Scale with Agentic AI and Semantic Kernel
In this post, I’ll show how you can deploy an AI Agent on Azure Kubernetes Service (AKS) using a multi-tenant approach that maximizes both security and cost efficiency. By isolating each tenant’s agent instance within the cluster and ensuring that every agent has access only to its designated Azure Blob Storage container, cross-tenant data leakage risks are eliminated. This model allows you to allocate compute and storage resources per tenant, optimizing usage and spending while maintaining strong data segregation and operational flexibility—key requirements for scalable, enterprise-grade AI applications.
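The isolation model described above boils down to binding each tenant's agent instance to exactly one storage container. A minimal sketch of that binding (the `TenantAgent` class, container naming scheme, and `read` method are all hypothetical illustrations, not AKS or Azure SDK constructs):

```python
# Illustrative sketch of per-tenant isolation: each tenant gets its own
# agent instance bound to a single, tenant-specific storage container.
# The class, naming scheme, and read() method are hypothetical, not Azure SDK.

class TenantAgent:
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        # One container per tenant; the agent never sees any other container.
        self.container = f"tenant-{tenant_id}-data"

    def read(self, blob: str) -> str:
        # A real deployment would enforce this with tenant-scoped credentials
        # (and per-tenant namespaces/quotas on the cluster), so the agent
        # physically cannot reach another tenant's data.
        return f"{self.container}/{blob}"

# Each tenant's agent is a separate, isolated instance in the cluster.
agents = {t: TenantAgent(t) for t in ["contoso", "fabrikam"]}
print(agents["contoso"].read("report.txt"))
```

Because the container name is fixed at construction time, a request routed to one tenant's agent can only ever produce paths inside that tenant's container, which is the property that rules out cross-tenant data leakage.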