Integrate Custom Azure AI Agents with Copilot Studio and M365 Copilot
In today's fast-paced digital world, integrating custom agents with Copilot Studio and M365 Copilot can significantly enhance your company's digital presence and extend your Copilot platform to your enterprise applications and data. This blog will guide you through the steps of bringing a custom Azure AI Agent Service agent, hosted in an Azure Function App, into a Copilot Studio solution and publishing it to M365 and Teams applications.

When Might This Be Necessary: Integrating custom agents with Copilot Studio and M365 Copilot is useful when you want to automate tasks, streamline processes, and provide a better experience for your end users. This integration is particularly valuable for organizations looking to consolidate their AI platform, extend out-of-the-box functionality, and leverage existing enterprise data and applications to optimize their operations. Custom agents built on Azure allow you to achieve greater customization and flexibility than using Copilot Studio agents alone.

What You Will Need: To get started, you will need the following:
- Azure AI Foundry
- Azure OpenAI Service
- Copilot Studio Developer License
- Microsoft Teams Enterprise License
- M365 Copilot License

Steps to Integrate Custom Agents:

Create a Project in Azure AI Foundry: Navigate to Azure AI Foundry and create a project. Select 'Agents' from the 'Build and Customize' menu pane on the left side of the screen and click the blue button to create a new agent.

Customize Your Agent: Your agent will automatically be assigned an Agent ID. Give your agent a name and assign the model it will use, then customize your agent with instructions. Add your knowledge source: you can connect to Azure AI Search, load files directly to your agent, link to Microsoft Fabric, or connect to third-party sources like Tripadvisor.
In our example, we are only testing the Copilot integration steps of the AI agent, so we did not build out the additional options for grounding knowledge or function calling here.

Test Your Agent: Once you have created your agent, test it in the playground. If you are happy with it, you are ready to call the agent from an Azure Function.

Create and Publish an Azure Function: Use the sample function code from the GitHub repository to call the Azure AI Project and agent, then publish your Azure Function to make it available for integration: azure-ai-foundry-agent/function_app.py at main · azure-data-ai-hub/azure-ai-foundry-agent

Connect Your AI Agent to Your Function: Update the "AIProjectConnString" value to include your project connection string from the project overview page in the AI Foundry.

Role-Based Access Controls: We have to add a role for the function app on the Azure OpenAI service (see Role-based access control for Azure OpenAI - Azure AI services | Microsoft Learn):
- Enable managed identity on the Function App.
- Grant the "Cognitive Services OpenAI Contributor" role to the Function App's system-assigned managed identity on the Azure OpenAI resource.
- Grant the "Azure AI Developer" role to the Function App's system-assigned managed identity on the Azure AI Project resource from the AI Foundry.

Build a Flow in Power Platform: Before you begin, make sure you are working in the same environment you will use to create your Copilot Studio agent. To get started, navigate to the Power Platform (https://make.powerapps.com) to build a flow that connects your Copilot Studio solution to your Azure Function App. When creating a new flow, select 'Build an instant cloud flow' and trigger the flow using 'Run a flow from Copilot'. Add an HTTP action to call the function using its URL, passing the message prompt from the end user. The output of your function is plain text, so you can pass the response from your Azure AI agent directly to your Copilot Studio solution.
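The contract between the cloud flow and the function is simple: the flow posts the user's prompt as JSON, and the function replies with plain text. The sketch below shows that handler logic in isolation; the "prompt" field name and the injected call_agent callable are illustrative assumptions, not the exact contract of the linked sample function app.

```python
import json

def handle_request(body: str, call_agent) -> str:
    """Parse the JSON body sent by the Power Automate HTTP action,
    forward the user's prompt to the agent, and return plain text.
    The 'prompt' field name is a hypothetical example."""
    payload = json.loads(body)
    prompt = payload.get("prompt", "").strip()
    if not prompt:
        return "Please provide a prompt."
    # call_agent stands in for the real Azure AI Agent invocation
    # (create thread, post message, run agent, read the reply),
    # configured via the AIProjectConnString app setting.
    return call_agent(prompt)

# Stubbed usage: a lambda replaces the real agent call.
reply = handle_request('{"prompt": "hello"}', lambda p: f"Agent echo: {p}")
```

Because the return value is plain text rather than a JSON envelope, the Copilot Studio topic can relay it to the end user without any parsing step.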
Create Your Copilot Studio Agent: Navigate to Microsoft Copilot Studio and select 'Agents', then 'New Agent'. Make sure you are in the same environment you used to create your cloud flow, then select the 'Create' button at the top of the screen.

From the top menu, navigate to 'Topics' and 'System', and open the 'Conversation boosting' topic. When you first open the Conversation boosting topic, you will see a template of connected nodes. Delete all but the initial 'Trigger' node. Now rebuild the conversation boosting topic to call the flow you built in the previous step: select 'Add an Action' and then select the option for an existing Power Automate flow. Pass the response from your custom agent to the end user and end the current topic.

My existing Cloud Flow: (screenshot)
Add action to connect to existing Cloud Flow: (screenshot)

When this menu pops up, you should see the option to run the flow you created. Here, mine does not have a very unique name, but you can see my flow 'Run a flow from Copilot' as a Basic action menu item. If you do not see your cloud flow here, add the flow to the default solution in the environment: go to Solutions > select the All pill > Default Solution > add the cloud flow you created to the solution. Then go back to Copilot Studio and refresh, and the flow will be listed there. Now complete building out the conversation boosting topic.

Make Agent Available in M365 Copilot: Navigate to the 'Channels' menu and select 'Teams + Microsoft 365'. Be sure to select the box to 'Make agent available in M365 Copilot'. Save and re-publish your Copilot agent. It may take up to 24 hours for the agent to appear in the M365 Teams agents list. Once it has loaded, select the 'Get Agents' option from the side menu of Copilot and pin your Copilot Studio agent to your featured agents list. Now you can chat with your custom Azure AI agent directly from M365 Copilot!
Conclusion: By following these steps, you can successfully integrate custom Azure AI Agents with Copilot Studio and M365 Copilot, enhancing the utility of your existing platform and improving operational efficiency. This integration allows you to automate tasks, streamline processes, and provide a better experience for your end users. Give it a try! Curious how to bring custom models from your AI Foundry to your Copilot Studio solutions? Check out this blog.

Ignite 2024: Streamlining AI Development with an Enhanced User Interface, Accessibility, and Learning Experiences in Azure AI Foundry portal
Announcing Azure AI Foundry, a unified platform that simplifies AI development and management. The platform portal (formerly Azure AI Studio) features a revamped user interface, an enhanced model catalog, a new management center, and improved accessibility and learning experiences, making it easier than ever for developers and IT admins to design, customize, and manage AI apps and agents efficiently.

Build recap: new Azure AI Foundry resource, Developer APIs and Tools
At Microsoft Build 2025, we introduced the Azure AI Foundry resource, the Azure AI Foundry API, and supporting tools to streamline the end-to-end development lifecycle of AI agents and applications. These capabilities are designed to help developers accelerate time-to-market; support production-scale workloads with scale and central governance; and give administrators a self-serve capability to enable their teams' experimentation with AI in a controlled environment.

The Azure AI Foundry resource type unifies agents, models and tools under a single management grouping, equipped with built-in enterprise-readiness capabilities such as tracing and monitoring, agent- and model-specific evaluation capabilities, and customizable enterprise setup configurations tailored to your organizational policies, like using your own virtual networks. This launch represents our commitment to providing organizations with a consistent, efficient and centrally governable environment for building and operating the AI agents and applications of today, and tomorrow.

New platform capabilities

The new Foundry resource type evolves our vision for Azure AI Foundry as a unified Azure platform-as-a-service offering, enabling developers to focus on building applications rather than managing infrastructure, while taking advantage of native Azure platform capabilities like Azure Data and Microsoft Defender. Previously, Azure AI Foundry portal's capabilities required the management of multiple Azure resources and SDKs to build an end-to-end application. New capabilities include:

- Foundry resource type gives administrators a consistent way of managing security and access to Agents, Models, Projects, and Azure tooling integration. With this change, Azure role-based access control, networking and policies are administered under a single Azure resource provider namespace, for streamlined management. 'Azure AI Foundry' is a renaming of the former 'Azure AI Services' resource type, with access to new capabilities. While Azure AI Foundry still supports bring-your-own Azure resources, we now default to a fully Microsoft-managed experience, making it faster and easier to get started.
- Foundry projects are folders that enable developers to independently create new environments for exploring new ideas and building prototypes, while managing data in isolation. Projects are child resources; they may be assigned their own admin controls, but by default they share common settings, such as networking or connected resource access, from their parent resource. This principle aims to take IT admins out of the day-to-day loop once security and governance are established at the resource level, enabling developers to self-serve confidently within their projects.
- Azure AI Foundry API is designed from the ground up to build and evaluate API-first agentic applications, and lets you work across model providers agnostically with a consistent contract.
- Azure AI Foundry SDK wraps the Foundry API, making it easy to integrate capabilities into code, whether your application is built in Python, C#, JavaScript/TypeScript or Java.
- Azure AI Foundry for VS Code Extension complements your workflow with capabilities to help you explore models and develop agents, and is now supported with the new Foundry project type.
- New built-in RBAC roles provide up-to-date role definitions to help admins differentiate access between Administrator, Project Manager and Project user. Foundry RBAC actions follow strict control- and data-plane separation, making it easier to implement the principle of least privilege.

Why we built these new platform capabilities

If you are already building with Azure AI Foundry, these capabilities are meant to simplify platform management, enhance workflows that span multiple models and tools, and reinforce governance capabilities as AI workloads grow more complex.
The emergence of generative AI fundamentally changed how customers build AI solutions, requiring capabilities that span multiple traditional domains. We launched Azure AI Foundry to provide a comprehensive toolkit for exploring, building and evaluating this new wave of GenAI solutions. Initially, this experience was backed by two core Azure services: Azure AI Services for accessing models, including those from OpenAI, and Azure Machine Learning's hub for tools for orchestration and customization.

With AI agents composing models and tools, and production workloads demanding the enforcement of central governance across them, we are investing to bring the management of agents, models and their tooling integration layer together to best serve these workloads' requirements. The Azure AI Foundry resource and Foundry API are purposefully designed to unify and simplify the composition and management of the core building blocks of AI applications:

- Models
- Agents and their tools
- Observability, Security, and Trust

In this new era of AI, there is no one-size-fits-all approach to building AI agents and applications. That's why we designed the new platform as a comprehensive AI factory with modular, extensible, and interoperable components.

Foundry Project vs Hub-Based Project

Going forward, new agent and model-centric capabilities will only land on the new Foundry project type. This includes access to Foundry Agent Service in GA and the Foundry API. While we are transitioning to Azure AI Foundry as a managed platform service, the hub-based project type remains accessible in the Azure AI Foundry portal for GenAI capabilities that are not yet supported by the new resource type. Hub-based projects will continue to support use cases for custom model training in Azure Machine Learning Studio, CLI and SDK. For a full overview of capabilities supported by each project type, see this support matrix.
Azure AI Foundry Agent Service

The Azure AI Foundry Agent Service experience, now generally available, is powered by the new Foundry project. Existing customers exploring the GA experience will need the new AI Foundry resource. All new investments in the Azure AI Foundry Agent Service are focused on the Foundry project experience. Foundry projects act as secure units of isolation and collaboration; agents within a project share:

- File storage
- Thread storage (i.e. conversation history)
- Search indexes

You can also bring your own Azure resources (e.g., storage, bring-your-own virtual network) to support compliance and control over sensitive data.

Start Building with Foundry

Azure AI Foundry is your foundation for scalable, secure, and production-grade AI development. Whether you're building your first agent or deploying a multi-agent workforce at scale, Azure AI Foundry is ready for what's next.

Azure AI Foundry: Advancing OpenTelemetry and delivering unified multi-agent observability
Microsoft is enhancing multi-agent observability by introducing new semantic conventions to OpenTelemetry, developed collaboratively with Outshift, Cisco’s incubation engine. These additions—built upon OpenTelemetry and W3C Trace Context—establish standardized practices for tracing and telemetry within multi-agent systems, facilitating consistent logging of key metrics for quality, performance, safety, and cost. This systematic approach enables more comprehensive visibility into multi-agent workflows, including tool invocations and collaboration. These advancements have been integrated into Azure AI Foundry, Microsoft Agent Framework, Semantic Kernel, and Azure AI packages for LangChain, LangGraph, and the OpenAI Agents SDK, enabling customers to get unified observability for agentic systems built using any of these frameworks with Azure AI Foundry. The additional semantic conventions and integration across different frameworks equip developers to monitor, troubleshoot, and optimize their AI agents in a unified solution with increased efficiency and valuable insights. “Outshift, Cisco's Incubation Engine, worked with Microsoft to add new semantic conventions in OpenTelemetry. These conventions standardize multi-agent observability and evaluation, giving teams comprehensive insights into their AI systems.” Giovanna Carofiglio, Distinguished Engineer, Cisco Multi-agent observability challenges Multi-agent systems involve multiple interacting agents with diverse roles and architectures. Such systems are inherently dynamic—they adapt in real time by decomposing complex tasks into smaller, manageable subtasks and distributing them across specialized agents. Each agent may invoke multiple tools, often in parallel or sequence, to fulfill its part of the task, resulting in emergent workflows that are non-linear and highly context dependent. 
Given the dynamic nature of multi-agent systems and the coordination across agents, observability becomes critical for debugging, performance tuning, security, and compliance in such systems. Multi-agent observability presents unique challenges that traditional GenAI telemetry standards fail to address. Traditional observability conventions are optimized for single-agent reasoning-path visibility and lack the semantic depth to capture collaborative workflows across multiple agents. In multi-agent systems, tasks are often decomposed and distributed dynamically, requiring visibility into agent roles, task hierarchies, tool usage, and decision-making processes. Without standardized task spans and a unified namespace, it's difficult to trace cross-agent coordination, evaluate task outcomes, or analyze retry patterns. These gaps hinder white-box observability, making it hard to assess performance, safety, and quality across complex agentic workflows.

Extending OpenTelemetry with multi-agent observability

Microsoft has proposed new spans and attributes to the OpenTelemetry semantic conventions for GenAI agent and framework spans. They enrich insights and capture the complexity of agent, tool, task and plan interactions in multi-agent systems. Below is a list of all the new additions proposed to OpenTelemetry:

New span:
- execute_task: Captures task planning and event propagation, providing insights into how tasks are decomposed and distributed.

New child spans under "invoke_agent":
- agent_to_agent_interaction: Traces communication between agents.
- agent.state.management: Effective context, short- or long-term memory management.
- agent_planning: Logs the agent's internal planning steps.
- agent orchestration: Captures agent-to-agent orchestration.

New attributes in the "invoke_agent" span:
- tool_definitions: Describes the tool's purpose or configuration.
- llm_spans: Records model call spans.

New attributes in the "execute_tool" span:
- tool.call.arguments: Logs the arguments passed during tool invocation.
- tool.call.results: Records the results returned by the tool.

New event:
- Evaluation, with attributes (name, error.type, label): Enables structured evaluation of agent performance and decision-making.

More details can be found in the following pull requests merged into OpenTelemetry:
- Add tool definition plus tool-related attributes in invoke-agent, inference, and execute-tool spans
- Capture evaluation results for GenAI applications

Azure AI Foundry delivers unified observability for Microsoft Agent Framework, LangChain, LangGraph, OpenAI Agents SDK

Agents built with Azure AI Foundry (SDK or portal) get out-of-the-box observability in Foundry. With the new additions, agents built on different frameworks, including Microsoft Agent Framework, Semantic Kernel, LangChain, LangGraph and the OpenAI Agents SDK, can use Foundry for monitoring, analyzing, debugging and evaluation with full observability. Agents built on Microsoft Agent Framework and Semantic Kernel get out-of-box tracing and evaluation support in Foundry Observability. Agents built with LangChain, LangGraph and the OpenAI Agents SDK can use the corresponding packages and the detailed instructions listed in the documentation to get tracing and evaluation support in Foundry Observability.
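To make the hierarchy concrete, the sketch below models where the proposed spans and attributes sit relative to each other. It deliberately uses plain dicts rather than the OpenTelemetry SDK, so the span names and attribute keys shown are the proposed conventions, while the structure and values are illustrative.

```python
# Illustrative model of the proposed span hierarchy: an invoke_agent
# parent span with planning, task, and tool-call children.
def make_span(name, attributes=None, children=None):
    return {"name": name, "attributes": attributes or {}, "children": children or []}

trace = make_span(
    "invoke_agent",
    attributes={
        # tool_definitions: what tools the agent had available
        "tool_definitions": [{"name": "search", "description": "vector search"}],
    },
    children=[
        make_span("agent_planning"),                       # internal planning steps
        make_span("execute_task", attributes={"task": "summarize report"}),
        make_span("execute_tool", attributes={
            "tool.call.arguments": {"query": "Q1 revenue"}, # inputs to the tool
            "tool.call.results": {"hits": 3},               # what the tool returned
        }),
        make_span("agent_to_agent_interaction"),            # cross-agent communication
    ],
)

child_names = [c["name"] for c in trace["children"]]
```

In a real system these would be emitted through an OpenTelemetry tracer and exported to Foundry Observability; the point here is only the parent-child layout and where each new attribute attaches.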
Customer benefits

With standardized multi-agent observability and support across multiple agent frameworks, customers get the following benefits:

- Unified observability platform for multi-agent systems: Foundry Observability is the unified multi-agent observability platform for agents built with the Foundry SDK or other agent frameworks like Microsoft Agent Framework, LangGraph, LangChain, and the OpenAI Agents SDK.
- End-to-end visibility into multi-agent systems: Customers can see not just what the system did, but how and why, from user request through agent collaboration and tool usage to final output.
- Faster debugging and root cause analysis: When something goes wrong (e.g., a wrong answer, safety violation, or performance bottleneck), customers can trace the exact path and see which agent, tool, or task failed, and why.
- Quality and safety assurance: Built-in metrics and evaluation events (e.g. task success and validation scores) help customers monitor and improve the quality and safety of their AI workflows.
- Cost and performance optimization: Detailed metrics on token usage, API calls, resource consumption, and latency help customers optimize efficiency and cost.

Get started today with end-to-end agent observability with Foundry

Azure AI Foundry Observability is a unified solution for evaluating, monitoring, tracing, and governing the quality, performance, and safety of your AI systems end-to-end, all built into your AI development loop and backed by the power of Azure Monitor for full-stack observability. From model selection to real-time debugging, Foundry Observability capabilities empower teams to ship production-grade AI with confidence and speed. It's observability, reimagined for the enterprise AI era. With the above OpenTelemetry enhancements, Azure AI Foundry now provides detailed multi-agent observability for agents built with different frameworks, including Azure AI Foundry, Microsoft Agent Framework, LangChain, LangGraph and the OpenAI Agents SDK.
Learn more about Azure AI Foundry Observability and get end-to-end agent observability today!

The Future of AI: Power Your Agents with Azure Logic Apps
Building intelligent applications no longer requires complex coding. With advancements in technology, you can now create agents using cloud-based tools to automate workflows, connect to various services, and integrate business processes across hybrid environments without writing any code.

Unlock Multi-Modal Embed 4 and Multilingual Agentic RAG with Command A on Azure
Developers and enterprises now have immediate access to state-of-the-art generative and semantic models purpose-built for RAG (retrieval-augmented generation) and agentic AI workflows on Azure AI Foundry to:

- Deploy high-performance LLMs and semantic search engines directly into production
- Build faster, more scalable, and multilingual RAG pipelines
- Leverage models that are optimized for enterprise workloads in finance, healthcare, government, and manufacturing

Cohere Embed 4: High-Performance Embeddings for Search & RAG

Accompanying Command A is Cohere's Embed 4, a cutting-edge embedding model ideal for retrieval-augmented generation pipelines and semantic search. Embed 4 (the latest evolution of Cohere's Embed series) converts text, and even images, into high-dimensional vector representations that capture semantic meaning. It's a multi-modal, multilingual embedding model designed to deliver strong recall and relevance in vector search, text classification, and clustering tasks. What makes Embed 4 stand out?

- 100+ Language Support: This model is truly global; it supports well over 100 languages for text embeddings. You can encode queries and documents in many languages (Arabic, Chinese, French, Hindi, etc.) into the same vector space, enabling cross-lingual search out of the box. For example, a question in Spanish can retrieve a relevant document originally in English if their ideas align semantically.
- Multi-Modal Embeddings: Embed 4 is capable of embedding not only text but also images. This means you can use it for multimodal search scenarios, e.g. indexing both textual content and images and allowing queries across them. Under the hood, the model has an image encoder; the Azure AI Foundry SDK provides an ImageEmbeddingsClient to generate embeddings from images. With this, you could embed a diagram or a screenshot and find text documents that are semantically related to that image's content.
- Matryoshka Embeddings (Scalable Dimensions): A novel feature in Cohere's Embed 4 is Matryoshka representation learning, which produces embeddings that can be truncated to smaller sizes with minimal loss in fidelity. In practice, the model can output a high-dimensional vector (e.g. 768 or 1024 dimensions), but you have the flexibility to use just the first 64, 128, 256, etc. dimensions if needed. These "nested" embeddings mean you can choose a vector size that balances accuracy against storage and query speed; smaller vectors save memory and compute while still preserving most of the semantic signal. This is great for enterprise deployments where vector database size and latency are critical.
- Enterprise Optimizations: Cohere has optimized Embed 4 for production use. It supports int8 quantization and binary embedding output natively, which can drastically reduce storage footprint and speed up similarity search with only minor impact on accuracy (useful for very large indexes). The model is also trained on massive datasets (including domain-specific data) to ensure robust performance on noisy enterprise text. It achieves state-of-the-art results on benchmark evaluations like MTEB, meaning you get retrieval quality on par with or better than other leading embedding models (OpenAI, Google, etc.). For instance, Cohere's previous embed model was top-tier on cross-language retrieval tasks, and Embed 4 further improves on that foundation.

Cohere Command A: Generative Model for Enterprise AI

Command A is Cohere's latest flagship large language model, designed for high-performance text generation in demanding enterprise scenarios. It's an instruction-tuned, conversational LLM that excels at complex tasks like multi-step reasoning, tool use (function calling), and retrieval-augmented generation.
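Before looking at Command A in detail, the Matryoshka truncation and int8 quantization described above can be sketched in a few lines. This is a minimal, pure-Python illustration under simplifying assumptions: the 4-element vector stands in for a real 1024-dimension embedding, re-normalizing after truncation is a common practice rather than a documented Embed 4 requirement, and the symmetric quantization scheme is a naive stand-in for production calibration.

```python
import math

def truncate_and_renormalize(vec, dims):
    """Matryoshka-style truncation: keep the first `dims` components,
    then re-normalize so dot products still behave like cosine similarity."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def quantize_int8(vec):
    """Naive symmetric int8 quantization: scale the largest-magnitude
    component to 127. Real systems calibrate the scale more carefully."""
    scale = max(abs(x) for x in vec) / 127.0
    return [round(x / scale) for x in vec], scale

full = [0.6, 0.8, 0.0, 0.0]                # toy stand-in for a full embedding
small = truncate_and_renormalize(full, 2)  # keep only the first 2 dimensions
codes, scale = quantize_int8(small)        # 8-bit codes plus a dequant scale
```

The storage math is the point: truncating 1024 floats to 256 and storing them as int8 instead of float32 shrinks each vector by roughly 16x, which is why these two options matter for large indexes.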
Command A features a massive 111B-parameter Transformer architecture with a 256K-token context length, enabling it to handle extremely large inputs (hundreds of pages of text) in a single prompt without losing coherence. (Source for the benchmarks above: Introducing Command A: Max performance, minimal compute.) Some key capabilities of Command A include:

- Long Context (256K tokens): Using an innovative attention architecture (sliding-window plus global attention), Command A can ingest up to 256,000 tokens of text in one go. This enables use cases like analyzing lengthy financial reports or entire knowledge bases in a single prompt.
- Enterprise-Tuned Generation: Command A is optimized for business applications; it is excellent at following instructions, summarization, and especially RAG workflows, where it integrates retrieved context and even cites sources to mitigate hallucinations. It supports tool calling (function calling) out of the box, so it can interact with external APIs or data sources as part of an Azure AI Agent.
- Multilingual Proficiency: Command A performs well across multilingual use cases, covering all major business languages, with near-leading performance in Japanese, Korean, and German.
- Efficient Deployment: Despite its size, Command A is engineered for efficiency: it delivers 150% higher throughput than its predecessor (Command R+ 08-2024) and requires only two A100/H100 GPUs to run, which in practice means lower latency. It also supports streaming token output, so applications can start receiving the response as it's generated, keeping interaction latency low.

Real-World Use Cases for Command A + Embed 4

With both a powerful generative model and a state-of-the-art embedding model at your fingertips, developers can build advanced AI solutions.
Here are some real-world use cases unlocked by Command A and Embed 4 on Azure:

- Financial Report Summarization (RAG): Imagine ingesting thousands of pages of financial filings, earnings call transcripts, and market research into a vector store. Using Embed 4, you can embed and index all this text. When an analyst asks "What were the key revenue drivers mentioned in ACME Corp's Q1 2025 report?", you use the query embedding to retrieve the most relevant passages. Command A (with its 256K context) can then take those passages and generate a concise summary or answer with cited evidence. The model's long context window means it can consider all retrieved chunks at once, and its enterprise tuning ensures factual, business-appropriate summaries.
- Legal Research Agent (Tool Use + Multilingual): Consider a multinational law firm handling cross-border mergers and acquisitions, with a vast repository of legal documents in multiple languages. Using Embed 4, they index these documents, creating multilingual embeddings. When a lawyer researches a specific legal precedent related to a merger in Germany, they can query in English: Embed 4 retrieves the relevant German documents, and Command A summarizes key points, translates excerpts, and compares legal arguments across jurisdictions. Furthermore, Command A leverages tool calling (utilizing agentic capabilities) to retrieve additional information from external databases, such as company registration details and regulatory filings, integrating this data into its analysis to provide a comprehensive report.
- Technician Knowledge Assistant (RAG + Multilingual): Think of a utilities company committed to operational excellence, managing a vast network of critical equipment, including power generators, transformers, and distribution lines. It can leverage Command A, integrated with Embed 4, to index a comprehensive repository of equipment manuals, maintenance records, and sensor data in multiple languages.
This enables technicians and engineers to access critical knowledge instantly. Technicians can ask questions in their native language about specific equipment issues, and Command A retrieves relevant manuals, troubleshooting guides, and past repair reports. It also guides technicians through complex maintenance procedures step by step, ensuring consistency and adherence to best practices. This empowers the company to optimize maintenance processes, improve overall equipment reliability, and enhance communication, ultimately achieving operational excellence.

- Multimodal Search & Indexing: With Embed 4's image embedding capability, you can build search systems that go beyond text. For instance, a media company could index its image library by generating embeddings for each image (using Azure's image embeddings client) and also index captions and descriptions. A user could then supply a query image (or a textual description) and retrieve both images and articles that are semantically similar to the query. This is useful for scenarios like finding slides similar to a given diagram, searching scanned invoices by content, or matching user-uploaded photos to reference documents.

Getting Started: Deploying via Azure AI Foundry

In Azure AI Foundry, Embed 4 can be used via the embeddings API to encode text or images into vectors. Each text input is turned into a numeric vector (e.g. a 1024-dimension float array) that you can store in a vector database or use for similarity comparisons. The embeddings are normalized for cosine similarity by default. You can also take advantage of Azure's vector index or Azure Cognitive Search to directly build vector search on top of these model outputs. (Image source: Introducing Embed 4: Multimodal search for business.) One of the biggest benefits of using Azure AI Foundry is the ease of deployment for these models.
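Because the embeddings come back unit-normalized, cosine similarity reduces to a plain dot product. The sketch below shows the retrieval step of such a pipeline in memory; the toy 2-D vectors and document ids are illustrative stand-ins for real Embed 4 outputs, and a production system would use a vector database or Azure's vector search instead of a Python dict.

```python
def dot(a, b):
    """Dot product; equals cosine similarity for unit-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def top_k(query_vec, index, k=2):
    """Rank documents by similarity to the query and return the best k ids.
    `index` maps document id -> embedding."""
    scored = sorted(index.items(), key=lambda kv: dot(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy unit vectors standing in for Embed 4 embeddings of indexed documents.
index = {
    "q1-report": [1.0, 0.0],
    "legal-memo": [0.0, 1.0],
    "earnings-call": [0.8, 0.6],
}

# Retrieve the two documents closest to a (toy) query embedding.
results = top_k([1.0, 0.0], index, k=2)
```

The returned passages would then be placed into Command A's prompt for answer generation, which is the RAG pattern the use cases above rely on.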
Cohere's Command A and Embed 4 are available in the model catalog: you can find their model cards and deploy them in just a few clicks. Azure AI Foundry supports serverless API endpoints for these models, meaning Microsoft hosts the inference infrastructure and scales it for you (with pay-as-you-go billing).

Integration with Azure AI Agent Service: If you're building an AI agent (a system that can orchestrate models and tools to perform tasks), Azure AI Agent Service makes it easy to incorporate these models. In the Agent Service, you can simply reference the deployed model by name as the agent's reasoning LLM. For example, you could specify an agent that uses CohereCommandA as its model and add tools like Azure Cognitive Search. The agent can then handle user requests by, say, using a search tool (powered by an Embed 4 vector index) and then passing the results to Command A for answer formulation, all managed by the Azure Agent framework. This lets you build production-grade agentic AI workflows that leverage Cohere's models with minimal plumbing. In short, Azure provides the glue to connect Command A, Embed 4, and tools into a coherent solution.

Try Command A and Embed 4 today on Azure AI Foundry

The availability of Cohere's Command A and Embed 4 on Azure AI Foundry empowers developers to build the next generation of intelligent apps on a fully managed platform. You can now easily deploy a 256K-context LLM that rivals the best in the industry, alongside a high-performance embedding model that plugs into your search and retrieval pipelines. Whether it's summarizing lengthy documents with cited facts, powering a multilingual enterprise assistant, enabling multimodal search experiences, or orchestrating complex tool-using agents, these models open up a world of possibilities. Azure AI Foundry makes it simple to integrate these capabilities into your solutions, with the security, compliance, and scalability of Azure's cloud.
We encourage you to try out Command A and Embed 4 in your own projects. Spin them up from the Azure model catalog, use the provided SDK examples to get started, and explore how they can elevate your applications' intelligence. With Cohere's models on Azure, you have cutting-edge AI at your fingertips, ready to deploy in production. We're excited to see what you build with them!

Announcing Model Fine-Tuning Collaborations: Weights & Biases, Scale AI, Gretel and Statsig
As AI continues to transform industries, the ability to fine-tune models and customize them for specific use cases has become more critical than ever. Fine-tuning enables companies to align models with their unique business goals, ensuring that AI solutions deliver results with greater precision. However, organizations face several hurdles in their model customization journey:

Lack of end-to-end tooling: Organizations struggle with fine-tuning foundation models due to complex processes and the absence of tracking and evaluation tools for modifications.
Data scarcity and quality: Limited access to large, high-quality datasets, along with privacy issues and high costs, complicates model training and fine-tuning.
Shortage of fine-tuning expertise and pre-trained models: Many companies lack specialized knowledge and access to refined models for fine-tuning.
Insufficient experimentation tools: A lack of tools for ongoing experimentation in production limits optimization of key variables like model diversity and operational efficiency.

To address these challenges, Azure AI Foundry is pleased to announce new collaborations with Weights & Biases, Scale AI, Gretel and Statsig to streamline model fine-tuning and experimentation through advanced tools, synthetic data, and specialized expertise.

Weights & Biases integration with Azure OpenAI Service: Making end-to-end fine-tuning accessible with tooling

The integration of Weights & Biases with Azure OpenAI Service offers a comprehensive end-to-end solution for enterprises aiming to fine-tune foundation models such as GPT-4, GPT-4o, and GPT-4o mini. This collaboration provides a seamless connection between Azure OpenAI Service and Weights & Biases Models, which offers powerful capabilities for experiment tracking, visualization, model management, and collaboration.
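Attaching experiment tracking to a fine-tuning job is largely a matter of the job configuration. The sketch below shows the shape of such a request body; the `integrations` field follows the public OpenAI fine-tuning jobs format, and the training-file ID and W&B project name are placeholders, so confirm the exact schema against the Azure OpenAI Service documentation before relying on it.

```python
import json

# Illustrative request body for a fine-tuning job with W&B tracking
# attached. "file-abc123" and the project name are placeholders, not
# real identifiers; the field layout mirrors the OpenAI fine-tuning
# "integrations" parameter and may differ slightly on Azure.
job_request = {
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",   # ID of a previously uploaded JSONL file
    "integrations": [
        {
            "type": "wandb",
            "wandb": {"project": "azure-finetune-demo"},  # hypothetical W&B project
        }
    ],
}

print(json.dumps(job_request, indent=2))
```

With tracking attached this way, each training run's metrics and checkpoints surface in the W&B workspace without extra instrumentation code.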
With the integration, users can also utilize Weights & Biases Weave to evaluate, monitor, and iterate in real time on the performance of AI applications powered by their fine-tuned models. Azure's scalable infrastructure allows organizations to handle the computational demands of fine-tuning, while Weights & Biases offers robust capabilities for fine-tuning experimentation and evaluation of LLM-powered applications. Whether optimizing GPT-4o for complex reasoning tasks or using the lightweight GPT-4o mini for real-time applications, the integration simplifies the customization of models to meet enterprise-specific needs. This collaboration addresses the growing demand for tailored AI models in industries such as retail and finance, where fine-tuning can significantly improve customer service chatbots or complex financial analysis.

The Azure OpenAI Service and Weights & Biases integration is now available in public preview. For further details on the integration, including real-world use cases and a demo, refer to the blog here.

Scale AI and Azure Collaboration: Confidently Implement Agentic GenAI Solutions in Production

Scale AI collaborates with Azure AI Foundry to offer advanced fine-tuning and model customization for enterprise use cases. It enhances the performance of Azure AI Foundry models by providing high-quality data transformation, fine-tuning and customization services, end-to-end solution development, and specialized generative AI expertise. This collaboration helps improve the performance of AI-driven applications and Azure AI services such as Azure AI Agent in Azure AI Foundry, while reducing production time and driving business impact.

"Scale is excited to partner with Azure to help our customers transform their proprietary data into real business value with end-to-end GenAI solutions, including model fine-tuning and customization in Azure."
Vijay Karunamurthy, Field CTO, Scale AI

Check out a demo in the BRK116 session showcasing how Scale AI's fine-tuned models can improve agents in Azure AI Foundry and Copilot Studio. In the coming months, Scale AI will offer fine-tuning services for Azure AI Agents in Azure AI Foundry. For more details, please refer to this blog and start transforming your AI initiatives by exploring Scale AI on the Azure Marketplace.

Gretel and Azure OpenAI Service Collaboration: Revolutionizing the data pipeline for custom AI models

Azure AI Foundry is collaborating with Gretel, a pioneer in synthetic data and privacy technology, to remove data bottlenecks and bring advanced AI development capabilities to our customers. Gretel's platform enables Azure users to generate high-quality datasets for ML and AI through multiple approaches, from prompts and seed examples to differential-privacy-preserved synthetic data. This technology helps organizations overcome key challenges in AI development, including data availability, privacy requirements, and high development costs, with support for structured, unstructured, and hybrid text data formats.

Through this collaboration, customers can seamlessly generate datasets tailored to their specific use cases and industry needs using Gretel, then use them directly in Azure OpenAI Service for fine-tuning. This integration greatly reduces both costs and time compared to traditional data labeling methods, while maintaining strong privacy and compliance standards. The collaboration enables new use cases for Azure AI Foundry customers, who can now easily use synthetic data generated by Gretel for training and fine-tuning models. These include cost-effective improvements for Small Language Models (SLMs), improved reasoning abilities of Large Language Models (LLMs), and scalable data generation from limited real-world examples. This value is already being realized by leading enterprises.
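Once synthetic records have been generated, the main integration step is reshaping them into the chat-format JSONL that Azure OpenAI fine-tuning consumes (one `{"messages": [...]}` object per line). A minimal sketch follows; the record shape, system prompt, and example Q&A content are invented for illustration and should be adapted to your actual Gretel output.

```python
import json

# Synthetic Q&A records such as a generation job might emit. The field
# names ("question"/"answer") are illustrative, not a fixed schema.
synthetic_records = [
    {"question": "What is a margin call?",
     "answer": "A demand to deposit further collateral."},
    {"question": "Define LTV ratio.",
     "answer": "Loan amount divided by asset value."},
]

def to_finetune_jsonl(records, system_prompt="You are a financial assistant."):
    """Convert records into the chat-format JSONL used for fine-tuning."""
    lines = []
    for r in records:
        lines.append(json.dumps({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": r["question"]},
                {"role": "assistant", "content": r["answer"]},
            ]
        }))
    return "\n".join(lines)

jsonl = to_finetune_jsonl(synthetic_records)
print(jsonl.splitlines()[0])
```

The resulting file is what gets uploaded as the training file for a fine-tuning job, so the privacy guarantees of the synthetic records carry straight through to the tuned model.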
"EY is leveraging the privacy-protected synthetic data to fine-tune Azure OpenAI Service models in the financial domain," said John Thompson, Global Client Technology AI Lead at EY. "Using this technology with differential privacy guarantees, we generate highly accurate synthetic datasets—within 1% of real data accuracy—that safeguard sensitive financial information and prevent PII exposure. This approach ensures model safety through privacy attack simulations and robust data quality reporting. With this integration, we can safely fine-tune models for our specific financial use cases while upholding the highest compliance and regulatory standards."

The Gretel integration with Azure OpenAI Service is available now through the Gretel SDK. Explore this blog describing a finance industry case study, and check out the technical documentation for fine-tuning Azure OpenAI Service models with synthetic data from Gretel. Visit this page to learn more.

Statsig and Azure Collaboration: Enabling Experimentation in AI Applications

Statsig is a platform for feature management and experimentation that helps teams manage releases, run powerful experiments, and measure the performance of their products. Statsig and Azure AI Foundry are collaborating to enable customers to easily configure and run experiments (A/B tests) in Azure AI-powered applications, using Statsig SDKs in Python, Node.js, and .NET. With these SDKs, customers can manage the configuration of their AI applications, manage the release of new configurations, run A/B tests to optimize model and application performance, and automatically collect metrics at the model and application level. Please check out this page to learn more about the collaboration, and get detailed documentation here.

Conclusion

The new collaborations between Azure and Weights & Biases, Scale AI, Gretel and Statsig represent a significant step forward in simplifying the process of AI model customization.
These collaborations aim to address the common pain points associated with fine-tuning models: the lack of end-to-end tooling, data scarcity and privacy concerns, and gaps in expertise and experimentation tooling. Through these collaborations, Azure AI Foundry will empower organizations to fine-tune and customize models more efficiently, ultimately enabling faster, more accurate AI deployments. Whether it's through better model tracking, access to synthetic data, or scalable data preparation services, these collaborations will help businesses unlock the full potential of AI.

The Future of AI: Autonomous Agents for Identifying the Root Cause of Cloud Service Incidents
Discover how Microsoft is transforming cloud service incident management with autonomous AI agents. Learn how AI-enhanced troubleshooting guides and agentic workflows are reducing downtime and empowering on-call engineers.

AI Agents: Mastering Agentic RAG - Part 5
This blog post, Part 5 of a series on AI agents, explores Agentic RAG (Retrieval-Augmented Generation), a paradigm shift in how LLMs interact with external data. Unlike traditional RAG, Agentic RAG allows LLMs to autonomously plan their information retrieval process through an iterative loop of actions and evaluations. The post highlights the importance of the LLM "owning" the reasoning process, dynamically selecting tools and refining queries. It covers key implementation details, including iterative loops, tool integration, memory management, and handling failure modes. Practical use cases, governance considerations, and code examples demonstrating Agentic RAG with AutoGen, Semantic Kernel, and Azure AI Agent Service are provided. The post concludes by emphasizing the transformative potential of Agentic RAG and encourages further exploration through linked resources and previous blog posts in the series.

Upgrade your voice agent with Azure AI Voice Live API
Today, we are excited to announce the general availability of Voice Live API, which enables real-time speech-to-speech conversational experiences through a unified API powered by generative AI models. With Voice Live API, developers can easily voice-enable any agent built with Azure AI Foundry Agent Service. Azure AI Foundry Agent Service enables the operation of agents that make decisions, invoke tools, and participate in workflows across development, deployment, and production. By eliminating the need to stitch together disparate components, Voice Live API offers a low-latency, end-to-end solution for voice-driven experiences.

As always, a diverse range of customers provided valuable feedback during the preview period. Along with announcing general availability, we are also taking this opportunity to address that feedback and improve the API. The following new features are designed to help developers and enterprises build scalable, production-ready voice agents.

More natively integrated GenAI models, including GPT-Realtime

Voice Live API enables developers to select from a range of advanced AI models designed for conversational applications, such as GPT-Realtime, GPT-5, GPT-4.1, Phi, and others. These models are natively supported and fully managed, eliminating the need for developers to manage model deployment or plan for capacity. Each natively supported model may be at a distinct stage in its life cycle (e.g. public preview, generally available) and subject to its own pricing structure. The table below lists the models supported in each pricing tier.

| Pricing Tier | Generally Available | In Public Preview |
| --- | --- | --- |
| Voice Live Pro | GPT-Realtime, GPT-4.1, GPT-4o | GPT-5 |
| Voice Live Standard | GPT-4o-mini, GPT-4.1-mini | GPT-4o-Mini-Realtime, GPT-5-mini |
| Voice Live Lite | NA | Phi-4-MM-Realtime, GPT-5-Nano, Phi-4-Mini |

Extended speech languages to 140+

Voice Live API now supports speech input in over 140 languages/locales.
View all supported languages by configuration. Automatic multilingual configuration is enabled by default, using the multilingual model.

Integrated with Custom Speech

Developers need customization to better manage input and output for different use cases. In addition to the support for Custom Voice released in May 2025, Voice Live now supports seamless integration with Custom Speech for improved speech recognition results. Developers can also improve speech input accuracy with phrase lists and refine speech synthesis pronunciation using custom lexicons, all without training a model. Learn how to customize speech and voice models for Voice Live API.

Natural HD voices upgraded

Neural HD voices in Azure AI Speech are contextually aware and engineered to provide a natural, expressive experience, making them ideal for voice agent applications. The latest V2 upgrade enhances lifelike qualities with features such as natural pauses, filler words, and seamless transitions between speaking styles, all available with Voice Live API. Check out the latest demo of Ava Neural HD V2.

Improved VAD features for interruption detection

Voice Live API now features semantic voice activity detection (VAD), enabling it to intelligently recognize pauses and filler-word interruptions in conversations. In the latest en-US evaluation on multilingual filler-word data, Voice Live API achieved roughly a 20% relative improvement over previous VAD models. This leap in performance is powered by integrating semantic VAD into the n-best pipeline, allowing the system to better distinguish meaningful speech from filler noise and enabling more accurate latency tracking and cleaner segmentation, especially in multilingual and noisy environments.

4K avatar support

Voice Live API enables efficient integration with streaming avatars. With the latest updates, avatar options include support for high-fidelity 4K video models. Learn more about the avatar update.
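Several of these options come together at session setup time. The sketch below shows the general shape of a session-update payload combining semantic VAD, an HD voice, and a phrase list; the field names and voice name are illustrative and should be checked against the Voice Live API reference before use.

```python
import json

# Illustrative session-update message for a Voice Live WebSocket
# connection. The exact schema (key names, voice identifiers) is an
# assumption here; consult the Voice Live API reference for the
# authoritative format.
session_update = {
    "type": "session.update",
    "session": {
        "turn_detection": {"type": "azure_semantic_vad"},   # filler-word-aware interruption handling
        "voice": {
            "name": "en-US-Ava:DragonHDLatestNeural",       # an HD neural voice
            "type": "azure-standard",
        },
        "input_audio_transcription": {
            "model": "azure-speech",
            "phrase_list": ["healow", "botim"],             # bias recognition toward product names
        },
    },
}

print(json.dumps(session_update)[:60])
```

Sending one message like this configures recognition, interruption handling, and synthesis together, which is what makes the single-API design convenient compared with wiring each component separately.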
Improved function calling and integration with Azure AI Foundry Agent Service

Voice Live API enables function calling to help developers build robust voice agents with their chosen generative AI models. This release improves asynchronous function calls and enhances integration with Azure AI Foundry Agent Service for agent creation and operation. Learn more about creating a Voice Live real-time voice agent with Azure AI Foundry Agent Service.

More developer resources and availability in more regions

Developer resources are available in C# and Python, with more to come. Get started with Voice Live API. Voice Live API is now available in more regions, including Australia East, East US, Japan East, and UK South, in addition to previously supported regions such as Central India, East US 2, Southeast Asia, Sweden Central, and West US 2. Check the features supported in each region.

Customers adopting Voice Live

In healthcare, patient experience is always the top priority. With Voice Live, eClinicalWorks' healow Genie contact center solution is now taking healthcare modernization a step further. healow is piloting Voice Live API for Genie to inform patients about their upcoming appointments, answer common questions, and return voicemails. Reducing these routine calls saves healthcare staff hours each day and boosts patient satisfaction through timely interactions.

"We're looking forward to using Azure AI Foundry Voice Live API so that when a patient calls, Genie can detect the question and respond in a natural voice in near-real time," said Sidd Shah, Vice President of Strategy & Business Growth at healow. "The entire roundtrip is all happening in Voice Live API."

If a patient asks about information in their medical chart, Genie can also fetch data from their electronic health record (EHR) and provide answers. Read the full story here.

"If we did multiple hops to go across different infrastructures, that would add up to a diminished patient experience.
The Azure AI Foundry Voice Live API is integrated into one single, unified solution, delivering speech-to-text and text-to-speech in the same infrastructure."

Bhawna Batra, VP of Engineering at eClinicalWorks

Capgemini, a global business and technology transformation partner, is reimagining its global service desk managed operations through its Capgemini Cloud Infrastructure Services (CIS) division. The first phase covers 500,000 users across 45 clients, which is only part of the overall deployment base. The goal is to modernize the service desk to meet changing expectations for speed, personalization, and scale.

To drive this transformation, Capgemini launched the "AI-Powered Service Desk" platform powered by Microsoft technologies including Dynamics 365 Contact Center, Copilot Studio, and Azure AI Foundry. A key enhancement was the integration of Voice Live API for real-time voice interactions, enabling intelligent, conversational support across telephony channels. The new platform delivers a more agile, truly conversational, AI-driven service experience, automating routine tasks and enhancing agent productivity. With scalable voice capabilities and deep integration across Microsoft's ecosystem, Capgemini is positioned to streamline support operations, reduce response times, and elevate customer satisfaction across its enterprise client base.

"Integrating Microsoft's Voice Live API into our platform has been transformative. We're seeing measurable improvements in user engagement and satisfaction thanks to the API's low-latency, high-quality voice interactions. As a result, we are able to deliver more natural and responsive experiences, which have been positively received by our customers."

Stephen Hilton, EVP Chief Operating Officer at CIS, Capgemini

Astra Tech, a fast-growing UAE-based technology group that is part of G42, is bringing Voice Live API to its flagship platform, botim, a fintech-first and AI-native platform.
Eight out of 10 smartphone users in the UAE already rely on the app. The company is now reshaping botim from a communications tool into a fintech-first service, adding features such as digital wallets, international remittances, and micro-loans. To achieve its broader vision, Astra Tech set out to make botim simpler, more intuitive, and more human.

"Voice removes a lot of complexity, and it's the most natural way to interact," says Frenando Ansari, Lead Product Manager at Astra Tech. "For users with low digital literacy or language barriers, tapping through traditional interfaces can be difficult. Voice personalizes the experience and makes it accessible in their preferred language."

"The Voice Live API acts as a connective tissue for AI-driven conversation across every layer of the app. It gives us a standardized framework so that different product teams can incorporate voice without needing to hire deep AI expertise."

Frenando Ansari, Lead Product Manager at Astra Tech

"The most impressive thing about the Voice Live API is the voice activity detection and the noise control algorithm."

Meng Wang, AI Head at Astra Tech

Get started

Voice Live API is transforming how developers build voice-enabled agent systems by providing an integrated, scalable, and efficient solution. By combining speech recognition, generative AI, and text-to-speech functionalities into a unified interface, it addresses the challenges of traditional implementations, enabling faster development and superior user experiences. From streamlining customer service to enhancing education and public services, the opportunities are endless. The future of voice-first solutions is here. Let's build it together!

Voice Live API introduction (video)
Try Voice Live in Azure AI Foundry
Voice Live API documents
Voice Live quickstart
Voice Live Agent code sample in GitHub