Azure AI Agent Service
The Future of AI: Vibe Code with Adaptive Custom Translation
This blog explores how vibe coding—a conversational, flow-based development approach—was used to build the AdaptCT playground in Azure AI Foundry. It walks through setting up a productive coding environment with GitHub Copilot in Visual Studio Code, configuring the Copilot agent, and building a translation playground using Adaptive Custom Translation (AdaptCT). The post includes real-world code examples, architectural insights, and advanced UI patterns. It also highlights how AdaptCT fine-tunes LLM outputs using domain-specific reference sentence pairs, enabling more accurate and context-aware translations. The blog concludes with best practices for vibe coding teams and a forward-looking view of AI-augmented development paradigms.

Beyond Prompts: How Agentic AI is Redefining Human-AI Collaboration
The Shift from Reactive to Proactive AI

As a passionate innovator in AI education, I'm on a mission to reimagine how we learn and build with AI—looking to craft intelligent agents that move beyond simple prompts to think, plan, and collaborate dynamically.

Traditional AI systems rely heavily on prompt-based interactions—you ask a question, and the model responds. These systems are reactive, limited to single-turn tasks, and lack the ability to plan or adapt. This becomes a bottleneck in dynamic environments where tasks require multi-step reasoning, memory, and autonomy.

Agentic AI changes the game. An agent is a structured system that uses a looped process to (see the minimal sketch after the framework list below):

- Think – analyze inputs, reason about tasks, and plan actions.
- Act – choose and execute tools to complete tasks.
- Learn – optionally adapt based on feedback or outcomes.

Unlike static workflows, agentic systems can:

- Make autonomous decisions
- Adapt to changing environments
- Collaborate with humans or other agents

This shift enables AI to move from being a passive assistant to an active collaborator—capable of solving complex problems with minimal human intervention.

What Is Agentic AI?

Agentic AI refers to AI systems that go beyond static responses—they can reason, plan, act, and adapt autonomously. These agents operate in dynamic environments, making decisions and invoking tools to achieve goals with minimal human intervention. Frameworks for building agentic AI include LangChain, Semantic Kernel, AutoGen, Crew AI, and MetaGPT, and they can work with models such as Azure OpenAI, Anthropic Claude, Google Gemini, Mistral AI, and Hugging Face Transformers.

Key Traits of Agentic AI

- Autonomy: Agents can independently decide what actions to take based on context and goals. Unlike assistants, which support users, agents complete tasks and drive outcomes.
- Memory: Agents can retain both long-term and short-term context. This enables personalized and context-aware interactions across sessions.
- Planning: Semantic Kernel agents use function calling to plan multi-step tasks. The AI can iteratively invoke functions, analyze results, and adjust its strategy—automating complex workflows.
- Adaptability: Agents dynamically adjust their behavior based on user input, environmental changes, or feedback. This makes them suitable for real-world applications like task management, learning assistants, or research copilots.

Frameworks That Enable Agentic AI

- Semantic Kernel: A flexible framework for building agents with skills, memory, and orchestration. Supports plugins, planning, and multi-agent collaboration. More information here: Semantic Kernel Agent Architecture.
- Azure AI Foundry: A managed platform for deploying secure, scalable agents with built-in governance and tool integration. More information here: Exploring the Semantic Kernel Azure AI Agent.
- LangGraph: A JavaScript-compatible SDK for building agentic apps with memory and tool-calling capabilities, ideal for web-based applications. More information here: Agentic app with LangGraph or Azure AI Foundry (Node.js) - Azure App Service.
- Copilot Studio: A low-code platform to build custom copilots and agentic workflows using generative AI, plugins, and orchestration. Ideal for enterprise-grade conversational agents. More information here: Building your own copilot with Copilot Studio.
- Microsoft 365 Copilot: Embeds agentic capabilities directly into productivity apps like Word, Excel, and Teams—enabling contextual, multi-step assistance across workflows. More information here: What is Microsoft 365 Copilot?
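To make the Think-Act-Learn loop concrete, here is a minimal, framework-agnostic Python sketch. The two tools and the keyword-based planning step are hypothetical placeholders rather than part of any framework listed above; a production agent would delegate the reasoning to an LLM.

```python
# A minimal, framework-agnostic sketch of the Think-Act-Learn agent loop.
# The two tools and the keyword-based "think" step are hypothetical placeholders;
# a production agent would delegate reasoning and tool selection to an LLM through
# a framework such as Semantic Kernel, AutoGen, or LangChain.

def search_docs(query: str) -> str:
    return f"Top search result for '{query}'"        # placeholder tool

def create_ticket(summary: str) -> str:
    return f"Ticket created for: {summary}"          # placeholder tool

TOOLS = {"search": search_docs, "ticket": create_ticket}

def think(goal: str, memory: list) -> tuple:
    """Plan the next action (stand-in for LLM reasoning); return (None, None) when done."""
    if memory:                                       # goal handled once, so stop
        return None, None
    tool = "ticket" if "incident" in goal.lower() else "search"
    return tool, goal

def act(tool_name: str, tool_input: str) -> str:
    """Execute the chosen tool."""
    return TOOLS[tool_name](tool_input)

def learn(memory: list, observation: str) -> None:
    """Record the outcome so later iterations (or sessions) can adapt."""
    memory.append(observation)

def run_agent(goal: str, max_steps: int = 3) -> list:
    memory: list = []
    for _ in range(max_steps):
        tool_name, tool_input = think(goal, memory)
        if tool_name is None:
            break
        learn(memory, act(tool_name, tool_input))
    return memory

print(run_agent("Find the troubleshooting guide for VPN connection drops"))
```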
Why It Matters: Real-World Impact

Traditional Generative AI is like a calculator—you input a question, and it gives you an answer. It's reactive, single-turn, and lacks context. While useful for quick tasks, it struggles with complexity, personalization, and continuity. Agentic AI, on the other hand, is like a smart teammate. It can:

- Understand goals
- Plan multi-step actions
- Remember past interactions
- Adapt to changing needs

Generative AI vs. Agentic Systems

| Feature | Generative AI | Agentic AI |
| --- | --- | --- |
| Interaction Style | One-shot responses | Multi-turn, goal-driven |
| Context Awareness | Limited | Persistent memory |
| Task Execution | Static | Dynamic and autonomous |
| Adaptability | Low | High (based on feedback/input) |

How Agentic AI Works — An Example for Students

Imagine a student named Alice preparing for her final exams. She uses a Smart Study Assistant powered by Agentic AI. Here's how the agent works behind the scenes:

Skills / Functions
These are the callable units of logic the agent can invoke to perform actions. The assistant has functions like:
- Summarize lecture notes
- Generate quiz questions
- Search academic papers
- Schedule study sessions
Think of these as plug-and-play capabilities the agent can call when needed.

Memory
The agent remembers Alice's:
- Past quiz scores
- Topics she struggled with
- Preferred study times
This helps the assistant personalize recommendations and avoid repeating content she already knows.

Planner
Instead of doing everything at once, the agent:
- Breaks down Alice's goal ("prepare for exams") into steps
- Plans a week-by-week study schedule
- Decides which skills/functions to use at each stage
It's like having a tutor who builds a custom roadmap.

Orchestrator
This is the brain that coordinates everything. It decides when to use memory, which function to call, and how to adjust the plan if Alice misses a study session or scores low on a quiz. It ensures the agent behaves intelligently and adapts in real time.
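To show how these four pieces could fit together, here is a minimal, framework-agnostic Python sketch of the Smart Study Assistant. The function names, scoring rule, and Alice's data are hypothetical illustrations, not an actual Microsoft API.

```python
# Hypothetical sketch of the Smart Study Assistant: skills, memory, planner, orchestrator.
# Names and logic are illustrative only; a real assistant would back the planner and
# skills with an LLM and enterprise data sources.

def summarize_notes(topic: str) -> str:
    return f"Summary of lecture notes on {topic}"

def generate_quiz(topic: str) -> str:
    return f"5 practice questions on {topic}"

SKILLS = {"summarize": summarize_notes, "quiz": generate_quiz}

# Memory: what the agent retains about Alice between sessions.
memory = {
    "quiz_scores": {"calculus": 55, "statistics": 82},
    "preferred_time": "evenings",
}

def plan(goal: str) -> list:
    """Planner: break the goal into (skill, topic) steps, weakest topics first."""
    weak_topics = sorted(memory["quiz_scores"], key=memory["quiz_scores"].get)
    return [(skill, topic) for topic in weak_topics for skill in ("summarize", "quiz")]

def orchestrate(goal: str) -> None:
    """Orchestrator: run the plan, consult memory, and adapt as results come in."""
    for skill, topic in plan(goal):
        result = SKILLS[skill](topic)
        print(f"[{memory['preferred_time']}] {result}")
        if skill == "quiz" and memory["quiz_scores"][topic] < 60:
            print(f"  -> scheduling an extra review session for {topic}")

orchestrate("prepare for final exams")
```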
Conclusion

Agentic AI marks a pivotal shift in how we interact with intelligent systems—from passive assistants to proactive collaborators. As we move beyond prompts, we unlock new possibilities for autonomy, adaptability, and human-AI synergy. Whether you're a developer, educator, or strategist, understanding agentic frameworks is no longer optional - it's foundational.

Get Started with Agentic AI

Here are the high-level steps to get started with Agentic AI using only official Microsoft resources, each with a direct link to the relevant documentation:

1. Understand Agentic AI Concepts - Begin by learning the fundamentals of AI agents, their architecture, and use cases. See: Explore the basics in this Microsoft Learn module
2. Set Up Your Azure Environment - Create an Azure account and ensure you have the necessary roles (e.g., Azure AI Account Owner or Contributor). See: Quickstart guide for Azure AI Foundry Agent Service
3. Create Your First Agent in Azure AI Foundry - Use the Foundry portal to create a project and deploy a default agent. Customize it with instructions and test it in the playground. See: Step-by-step agent creation in Azure AI Foundry
4. Build an Agentic Web App with Semantic Kernel or Foundry - Follow a hands-on tutorial to integrate agentic capabilities into a .NET web app using Semantic Kernel or Azure AI Foundry. See: Tutorial: Build an agentic app with Semantic Kernel or Foundry
5. Deploy and Test Your Agent - Use GitHub Codespaces or Azure Developer CLI to deploy your app and connect it to your agent. Validate functionality using OpenAPI tools and the agent playground. See: Deploy and test your agentic app

For Further Learning:

- Develop generative AI apps with Azure OpenAI and Semantic Kernel
- Agentic app with Semantic Kernel or Azure AI Foundry (.NET) - Azure App Service
- AI Agent Orchestration Patterns - Azure Architecture Center
- Configuring Agents with Semantic Kernel Plugins
- Workflows with AI Agents and Models - Azure Logic Apps

About the author: I'm Juliet Rajan, a Lead Technical Trainer and passionate innovator in AI education. I specialize in crafting gamified, visionary learning experiences and building intelligent agents that go beyond traditional prompt-based systems. My recent work explores agentic AI, autonomous copilots, and dynamic human-AI collaboration using platforms like Azure AI Foundry and Semantic Kernel.

Integrate Custom Azure AI Agents with Copilot Studio and M365 Copilot
Integrating Custom Agents with Copilot Studio and M365 Copilot

In today's fast-paced digital world, integrating custom agents with Copilot Studio and M365 Copilot can significantly enhance your company's digital presence and extend your Copilot platform to your enterprise applications and data. This blog will guide you through the steps of bringing a custom Azure AI Agent Service agent, hosted in an Azure Function App, into a Copilot Studio solution and publishing it to M365 and Teams applications.

When Might This Be Necessary

Integrating custom agents with Copilot Studio and M365 Copilot is necessary when you want to extend customization to automate tasks, streamline processes, and provide a better experience for your end users. This integration is particularly useful for organizations looking to streamline their AI platform, extend out-of-the-box functionality, and leverage existing enterprise data and applications to optimize their operations. Custom agents built on Azure allow you to achieve greater customization and flexibility than using Copilot Studio agents alone.

What You Will Need

- Azure AI Foundry
- Azure OpenAI Service
- Copilot Studio Developer License
- Microsoft Teams Enterprise License
- M365 Copilot License

Steps to Integrate Custom Agents

Create a Project in Azure AI Foundry: Navigate to Azure AI Foundry and create a project. Select 'Agents' from the 'Build and Customize' menu pane on the left side of the screen and click the blue button to create a new agent.

Customize Your Agent: Your agent will automatically be assigned an Agent ID. Give your agent a name and assign the model your agent will use. Customize your agent with instructions and add your knowledge source: you can connect to Azure AI Search, load files directly to your agent, link to Microsoft Fabric, or connect to third-party sources like Tripadvisor. In our example, we are only testing the Copilot integration steps of the AI Agent, so we did not build out additional options for grounding knowledge or function calling here.

Test Your Agent: Once you have created your agent, test it in the playground. If you are happy with it, you are ready to call the agent in an Azure Function.

Create and Publish an Azure Function: Use the sample function code from the GitHub repository to call the Azure AI Project and Agent, then publish your Azure Function to make it available for integration (an illustrative sketch appears after the role-assignment steps below): azure-ai-foundry-agent/function_app.py at main · azure-data-ai-hub/azure-ai-foundry-agent

Connect Your AI Agent to Your Function: Update the "AIProjectConnString" value to include your project connection string from the project overview page in AI Foundry.

Role-Based Access Controls: The Function App needs access to the Azure OpenAI service (see Role-based access control for Azure OpenAI - Azure AI services | Microsoft Learn):
- Enable managed identity on the Function App.
- Grant the "Cognitive Services OpenAI Contributor" role to the Function App's system-assigned managed identity on the Azure OpenAI resource.
- Grant the "Azure AI Developer" role to the Function App's system-assigned managed identity on the Azure AI Project resource from AI Foundry.
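For reference, here is a hedged sketch of what the Azure Function described above might look like, using the Python v2 programming model and a preview version of the azure-ai-projects SDK. The route name and the AGENT_ID setting are assumptions, and the agent method names (create_thread, create_message, create_and_process_run, list_messages) vary between SDK versions, so treat this as illustrative rather than a copy of the repository's function_app.py.

```python
# Illustrative sketch only: an HTTP-triggered Azure Function (Python v2 model) that
# forwards a prompt to an existing Azure AI Foundry agent and returns plain text.
# Method names on project.agents follow one preview version of azure-ai-projects
# and may differ in yours; AGENT_ID and the route are hypothetical app settings.
import os

import azure.functions as func
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)


@app.route(route="agent", methods=["POST"])
def call_agent(req: func.HttpRequest) -> func.HttpResponse:
    try:
        prompt = req.get_json().get("prompt")
    except ValueError:
        prompt = req.params.get("prompt")
    if not prompt:
        return func.HttpResponse("Missing 'prompt'.", status_code=400)

    # Authenticates with the managed identity granted the roles described above.
    project = AIProjectClient.from_connection_string(
        conn_str=os.environ["AIProjectConnString"],
        credential=DefaultAzureCredential(),
    )

    # Create a conversation thread, add the user's message, and run the agent.
    thread = project.agents.create_thread()
    project.agents.create_message(thread_id=thread.id, role="user", content=prompt)
    project.agents.create_and_process_run(
        thread_id=thread.id, agent_id=os.environ["AGENT_ID"]
    )

    # Newest messages come first; return the agent's latest text reply as plain text
    # so the Power Automate flow can pass it straight to Copilot Studio.
    messages = project.agents.list_messages(thread_id=thread.id)
    for message in messages.data:
        if message.role == "assistant" and message.content:
            return func.HttpResponse(message.content[0].text.value, status_code=200)
    return func.HttpResponse("No response from agent.", status_code=200)
```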
Build a Flow in Power Platform: Before you begin, make sure you are working in the same environment you will use to create your Copilot Studio agent. To get started, navigate to the Power Platform (https://make.powerapps.com) to build a flow that connects your Copilot Studio solution to your Azure Function App. When creating a new flow, select 'Build an instant cloud flow' and trigger the flow using 'Run a flow from Copilot'. Add an HTTP action that calls the Function's URL and passes the end user's message prompt in the request (a quick way to smoke-test this endpoint is sketched at the end of this walkthrough). The output of your function is plain text, so you can pass the response from your Azure AI Agent directly to your Copilot Studio solution.

Create Your Copilot Studio Agent: Navigate to Microsoft Copilot Studio and select 'Agents', then 'New Agent'. Make sure you are in the same environment you used to create your cloud flow, then select the 'Create' button at the top of the screen. From the top menu, navigate to 'Topics' and 'System', and open the 'Conversation boosting' topic. When you first open the Conversation boosting topic, you will see a template of connected nodes; delete all but the initial 'Trigger' node. Now rebuild the conversation boosting topic to call the flow you built in the previous step: select 'Add an Action' and then select the option for an existing Power Automate flow. Pass the response from your custom agent to the end user and end the current topic. (Screenshots: my existing cloud flow, and adding an action to connect to the existing cloud flow.) When this menu pops up, you should see the option to run the flow you created. Mine does not have a very unique name, but you can see my flow 'Run a flow from Copilot' as a Basic action menu item. If you do not see your cloud flow here, add the flow to the default solution in the environment: go to Solutions > select the All pill > Default Solution > add the cloud flow you created to the solution. Then go back to Copilot Studio and refresh, and the flow will be listed there. Now complete building out the conversation boosting topic.

Make the Agent Available in M365 Copilot: Navigate to the 'Channels' menu and select 'Teams + Microsoft 365'. Be sure to select the box to 'Make agent available in M365 Copilot', then save and re-publish your Copilot agent. It may take up to 24 hours for the agent to appear in the M365 Teams agents list. Once it has loaded, select the 'Get Agents' option from the side menu of Copilot and pin your Copilot Studio agent to your featured agent list. Now you can chat with your custom Azure AI Agent directly from M365 Copilot!

Conclusion

By following these steps, you can successfully integrate custom Azure AI Agents with Copilot Studio and M365 Copilot, enhancing the utility of your existing platform and improving operational efficiency. This integration allows you to automate tasks, streamline processes, and provide a better experience for your end users. Give it a try!

Curious how to bring custom models from your AI Foundry into your Copilot Studio solutions? Check out this blog
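Before publishing, it can help to smoke-test the Function endpoint that the flow calls. The sketch below assumes the hypothetical URL, key, and {"prompt": ...} payload shape used in the earlier Function sketch; substitute your own deployment's values.

```python
# Quick smoke test for the hypothetical Function endpoint the Power Automate flow calls.
# Replace the URL and key with your own deployment's values.
import requests

FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/agent"
FUNCTION_KEY = "<your-function-key>"

response = requests.post(
    FUNCTION_URL,
    headers={"x-functions-key": FUNCTION_KEY},   # standard Azure Functions key header
    json={"prompt": "What is the status of my open support ticket?"},
    timeout=60,
)
response.raise_for_status()
print(response.text)   # plain-text reply, as expected by the Copilot Studio flow
```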
The Future of AI: Optimize Your Site for Agents - It's Cool to be a Tool

Learn how to optimize your website for AI agents like Manus using NLWeb, MCP, structured data, and agent-responsive design. Discover best practices to improve discoverability, usability, and natural language access for autonomous assistants in the evolving agentic web.

The Future of AI: Harnessing AI agents for Customer Engagements
Discover how AI-powered agents are revolutionizing customer engagement—enhancing real-time support, automating workflows, and empowering human professionals with intelligent orchestration. Explore the future of AI-driven service, including Customer Assist created with Azure AI Foundry.

The Future of AI: Autonomous Agents for Identifying the Root Cause of Cloud Service Incidents
Discover how Microsoft is transforming cloud service incident management with autonomous AI agents. Learn how AI-enhanced troubleshooting guides and agentic workflows are reducing downtime and empowering on-call engineers.

Build recap: new Azure AI Foundry resource, Developer APIs and Tools
At Microsoft Build 2025, we introduced the Azure AI Foundry resource, the Azure AI Foundry API, and supporting tools to streamline the end-to-end development lifecycle of AI agents and applications. These capabilities are designed to help developers accelerate time-to-market; support production-scale workloads with scale and central governance; and give administrators a self-serve capability to enable their teams' experimentation with AI in a controlled environment.

The Azure AI Foundry resource type unifies agents, models, and tools under a single management grouping, equipped with built-in enterprise-readiness capabilities—such as tracing and monitoring, agent- and model-specific evaluation capabilities, and customizable enterprise setup configurations tailored to your organizational policies, like using your own virtual networks. This launch represents our commitment to providing organizations with a consistent, efficient, and centrally governable environment for building and operating the AI agents and applications of today, and tomorrow.

New platform capabilities

The new Foundry resource type evolves our vision for Azure AI Foundry as a unified Azure platform-as-a-service offering, enabling developers to focus on building applications rather than managing infrastructure, while taking advantage of native Azure platform capabilities like Azure Data and Microsoft Defender. Previously, Azure AI Foundry portal's capabilities required the management of multiple Azure resources and SDKs to build an end-to-end application. New capabilities include:

- Foundry resource type gives administrators a consistent way of managing security and access to Agents, Models, Projects, and Azure tooling integration. With this change, Azure role-based access control, networking, and policies are administered under a single Azure resource provider namespace, for streamlined management. 'Azure AI Foundry' is a renaming of the former 'Azure AI Services' resource type, with access to new capabilities. While Azure AI Foundry still supports bring-your-own Azure resources, we now default to a fully Microsoft-managed experience, making it faster and easier to get started.
- Foundry projects are folders that enable developers to independently create new environments for exploring new ideas and building prototypes, while managing data in isolation. Projects are child resources; they may be assigned their own admin controls but by default share common settings such as networking or connected resource access from their parent resource. This principle aims to take IT admins out of the day-to-day loop once security and governance are established at the resource level, enabling developers to self-serve confidently within their projects.
- Azure AI Foundry API is designed from the ground up to build and evaluate API-first agentic applications, and lets you work across model providers agnostically with a consistent contract.
- Azure AI Foundry SDK wraps the Foundry API, making it easy to integrate capabilities into code whether your application is built in Python, C#, JavaScript/TypeScript, or Java.
- Azure AI Foundry for VS Code Extension complements your workflow with capabilities to help you explore models and develop agents, and is now supported with the new Foundry project type.
- New built-in RBAC roles provide up-to-date role definitions to help admins differentiate access between Administrator, Project Manager, and Project users.
  Foundry RBAC actions follow strict control- and data-plane separation, making it easier to implement the principle of least privilege.

Why we built these new platform capabilities

If you are already building with Azure AI Foundry, these capabilities are meant to simplify platform management, enhance workflows that span multiple models and tools, and reinforce governance capabilities as AI workloads grow more complex.

The emergence of generative AI fundamentally changed how customers build AI solutions, requiring capabilities that span multiple traditional domains. We launched Azure AI Foundry to provide a comprehensive toolkit for exploring, building, and evaluating this new wave of GenAI solutions. Initially, this experience was backed by two core Azure services: Azure AI Services for accessing models, including those from OpenAI, and Azure Machine Learning's hub for access to tools for orchestration and customization. With AI agents composing models and tools, and production workloads demanding central governance across them, we are investing to bring the management of agents, models, and their tooling integration layer together to best serve these workloads' requirements.

The Azure AI Foundry resource and Foundry API are purposefully designed to unify and simplify the composition and management of the core building blocks of AI applications:

- Models
- Agents & their tools
- Observability, Security, and Trust

In this new era of AI, there is no one-size-fits-all approach to building AI agents and applications. That's why we designed the new platform as a comprehensive AI factory with modular, extensible, and interoperable components.

Foundry Project vs Hub-Based Project

Going forward, new agent and model-centric capabilities will only land on the new Foundry project type. This includes access to Foundry Agent Service in GA and the Foundry API. While we are transitioning to Azure AI Foundry as a managed platform service, the hub-based project type remains accessible in the Azure AI Foundry portal for GenAI capabilities that are not yet supported by the new resource type. Hub-based projects will continue to support use cases for custom model training in Azure Machine Learning Studio, CLI, and SDK. For a full overview of capabilities supported by each project type, see this support matrix.

Azure AI Foundry Agent Service

The Azure AI Foundry Agent Service experience, now generally available, is powered by the new Foundry project. Existing customers exploring the GA experience will need the new AI Foundry resource. All new investments in the Azure AI Foundry Agent Service are focused on the Foundry project experience. Foundry projects act as secure units of isolation and collaboration — agents within a project share:

- File storage
- Thread storage (i.e. conversation history)
- Search indexes

You can also bring your own Azure resources (e.g., storage, bring-your-own virtual network) to support compliance and control over sensitive data.

Start Building with Foundry

Azure AI Foundry is your foundation for scalable, secure, and production-grade AI development. Whether you're building your first agent or deploying a multi-agent workforce at scale, Azure AI Foundry is ready for what's next.
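As a starting point, the following is a hedged Python sketch of connecting to a Foundry project with the Azure AI Foundry SDK (azure-ai-projects). The endpoint, model deployment name, and instructions are placeholders, and the exact agent method names differ between SDK versions, so check the version you have installed.

```python
# Hedged sketch: connect to an Azure AI Foundry project and create an agent.
# Endpoint, deployment name, and instructions are placeholders; the agents
# sub-client's method names differ across azure-ai-projects versions.
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],   # your Foundry project endpoint
    credential=DefaultAzureCredential(),
)

# Create an agent against a model deployment in the project; interact with it in
# the Foundry portal playground or through the threads/runs APIs of your SDK version.
agent = project.agents.create_agent(
    model=os.environ.get("MODEL_DEPLOYMENT_NAME", "gpt-4o"),
    name="getting-started-agent",
    instructions="You are a helpful assistant for my Foundry project.",
)
print(f"Created agent: {agent.id}")
```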
The Future of AI: Developing Code Assist – a Multi-Agent Tool

Discover how Code Assist, created with Azure AI Foundry Agent Service, uses AI agents to automate code documentation, generate business-ready slides, and detect security risks in large codebases—boosting developer productivity and project clarity.

Transforming Customer Support with Azure OpenAI, Azure AI Services, and Voice AI Agents
Customer support today is under immense pressure to meet rising expectations of speed, personalization, and always-on availability. Yet businesses still struggle with:

1. Long wait times and call center queues
2. Disconnected support channels
3. Limited availability of agents outside business hours
4. Repetitive issues consuming valuable human time
5. Frustrated users due to a lack of immediate and contextual answers

These inefficiencies are costing businesses over $3.7 trillion annually in poor service delivery, while over 70% of agents (based on the research) spend excessive time searching for the right answers instead of resolving problems directly.

How Voice AI Agents Are Transforming the Support Experience

Enter the era of voice-enabled AI agents—powered by Azure OpenAI, Azure AI Services, and ServiceNow—designed to completely transform the way customers engage with support systems. These agents can now:

- Handle complex user queries in natural language
- Access enterprise systems (like CRM, ITSM, HR) in real time
- Automate repetitive tasks such as password resets, ticket status updates, or return tracking
- Escalate only when human assistance is truly needed
- Create connected, seamless, and intelligent support experiences across departments

Let's take a closer look at four architecture patterns that showcase how enterprises can deploy these agents effectively.

🔷 Architecture Pattern 1: Unified Voice Agent with Azure AI + ServiceNow + CRM Integration

In this architecture, the customer support journey begins when a user initiates a voice-based conversation through a front-end interface such as a web application, mobile app, or smart device. The captured audio is streamed directly to Azure OpenAI GPT-4o's real-time API, which performs immediate speech-to-text transcription, interprets the intent behind the request, and prepares the initial system response—all in a single seamless stream.

Once the user's intent is understood (e.g., "create a ticket", "check incident status", or "list recent issues"), GPT-4o passes control to Semantic Kernel, which orchestrates the next steps through function calling. Semantic Kernel hosts pre-defined tools (functions) that map to ServiceNow API actions, such as createIncident, getIncidentStatus, listIncidents, or searchKnowledgeBase. These function calls are then securely routed to ServiceNow via REST APIs.

ServiceNow executes the appropriate actions—whether that's creating a new support ticket, retrieving the status of an open incident, or searching its Knowledge Base. CRM data is also seamlessly accessed, if needed, to enrich responses with personalized context such as customer history or case metadata. The result from ServiceNow (e.g., an incident ID or KB article summary) is then sent back to Azure GPT-4o, which converts the structured data into a natural spoken response. This final audio output is delivered to the user in real time, completing the end-to-end conversational loop.

Additionally, tools like Azure Monitor or Application Insights can be integrated to log telemetry, track usage trends, monitor latency, and analyze user satisfaction over time. This architecture enables organizations to streamline customer support operations, reduce wait times, and deliver natural, intelligent assistance across any channel—voice-first.
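To illustrate the Semantic Kernel tool layer in Pattern 1, here is a hedged sketch of a plugin whose functions wrap the ServiceNow Table API. The instance URL and credentials are placeholders, and only the tool definitions are shown; wiring the plugin into the GPT-4o real-time orchestration (registering it on a Kernel and enabling automatic function calling) depends on your Semantic Kernel version.

```python
# Hedged sketch: a Semantic Kernel plugin exposing ServiceNow incident actions as
# callable functions. Instance URL and credentials are placeholders; error handling
# is omitted for brevity.
import os
from typing import Annotated

import requests
from semantic_kernel.functions import kernel_function

SN_INSTANCE = os.environ.get("SN_INSTANCE", "https://<your-instance>.service-now.com")
SN_AUTH = (os.environ.get("SN_USER", "<user>"), os.environ.get("SN_PASSWORD", "<password>"))


class ServiceNowPlugin:
    @kernel_function(name="create_incident", description="Create a ServiceNow incident.")
    def create_incident(
        self, short_description: Annotated[str, "One-line summary of the issue"]
    ) -> str:
        resp = requests.post(
            f"{SN_INSTANCE}/api/now/table/incident",
            auth=SN_AUTH,
            json={"short_description": short_description},
            timeout=30,
        )
        return resp.json()["result"]["number"]          # e.g. INC0012345

    @kernel_function(name="get_incident_status", description="Get the state of an incident by number.")
    def get_incident_status(
        self, number: Annotated[str, "Incident number, e.g. INC0012345"]
    ) -> str:
        resp = requests.get(
            f"{SN_INSTANCE}/api/now/table/incident",
            auth=SN_AUTH,
            params={"sysparm_query": f"number={number}", "sysparm_fields": "number,state"},
            timeout=30,
        )
        results = resp.json()["result"]
        return results[0]["state"] if results else "not found"

# Registration (the exact API surface varies by Semantic Kernel version):
#   kernel.add_plugin(ServiceNowPlugin(), plugin_name="ServiceNow")
```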
🔷 Architecture Pattern 2: Scalable Customer Support with Multi-Agent Voice Architecture

This architecture introduces a modular and distributed agent-based design to deliver intelligent, scalable customer support through a voice interface. The process starts with the User Proxy Agent, which acts as the entry point for all user conversations. It captures voice input and forwards the request to the Master Agent, which serves as the brain of the architecture.

The Master Agent, empowered with a large language model (LLM) and memory, interprets the intent behind the user's input and dynamically routes the request to the most appropriate domain-specific agent. These include specialized agents such as the Activation Agent, Root Agent, Sales Agent, or Technical Agent, each designed to handle specific workflows or business tasks.

- The Activation Agent connects to web services and handles provisioning or onboarding scenarios.
- The Root Agent taps into document search systems (like Azure Cognitive Search) to answer questions grounded in internal documentation.
- The Sales Agent is equipped with structured logic models (SLMs) and CRM access to retrieve sales-related data from backend databases.
- The Technical Agent is containerized via Docker and built to manage backend diagnostics, code-level issues, or infrastructure status—often connecting to systems like ServiceNow for real-time ITSM execution.

Once the task is executed by the respective agent, results are passed back through the Master Agent and ultimately to the User Proxy Agent, which synthesizes the output into a voice response and delivers it to the user. The presence of shared memory between agents allows for maintaining context across multi-turn conversations, enabling complex, multi-step interactions (e.g., "Create a ticket, check the latest order status, and escalate it if unresolved") without breaking continuity.

This architecture is ideal for enterprises looking to scale customer support horizontally, adding new agents without disrupting existing workflows. It enables parallelism, specialization, and real-time orchestration, providing faster resolutions while reducing the burden on human agents. Best suited for distributed support operations across IT, HR, sales, and field support—where task-specific intelligence and modular scale are critical.
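A minimal, framework-agnostic Python sketch of the routing idea behind Pattern 2 follows. The agent classes and keyword-based intent detection are hypothetical stand-ins; in the architecture described above, the Master Agent would use an LLM for intent classification and a voice front end for input and output.

```python
# Hypothetical sketch of the Master Agent routing pattern. The specialized agents
# and keyword-based intent detection are illustrative; a real system would use an
# LLM for classification and voice for input/output.

class ActivationAgent:
    def handle(self, request: str) -> str:
        return f"[activation] provisioning started for: {request}"

class SalesAgent:
    def handle(self, request: str) -> str:
        return f"[sales] pulled CRM data for: {request}"

class TechnicalAgent:
    def handle(self, request: str) -> str:
        return f"[technical] opened ServiceNow diagnostic for: {request}"

class MasterAgent:
    """Routes each request to the most appropriate domain-specific agent."""

    def __init__(self):
        self.routes = {
            "activate": ActivationAgent(),
            "order": SalesAgent(),
            "error": TechnicalAgent(),
        }
        self.shared_memory = []   # context shared across turns and agents

    def route(self, request: str) -> str:
        agent = next(
            (a for keyword, a in self.routes.items() if keyword in request.lower()),
            TechnicalAgent(),     # default handler
        )
        response = agent.handle(request)
        self.shared_memory.append(response)
        return response

class UserProxyAgent:
    """Entry point: in the real architecture this would transcribe and synthesize voice."""

    def __init__(self, master: MasterAgent):
        self.master = master

    def converse(self, utterance: str) -> str:
        return self.master.route(utterance)

proxy = UserProxyAgent(MasterAgent())
print(proxy.converse("Where is my order ORD-1234?"))
```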
🔷 Architecture Pattern 3: Customer Support Reinvented with Voice RAG + Azure AI + ServiceNow

This architecture brings a cutting-edge twist to Retrieval-Augmented Generation (RAG) by enabling it through a Voice AI agent—creating a truly conversational experience grounded in enterprise knowledge. By combining Azure OpenAI models with the ServiceNow Knowledge Base, this pattern ensures accurate, voice-driven support for employees or customers in real time.

The process begins when a user interacts with a voice-enabled interface—via phone, web, or embedded assistant. The Voice AI agent streams the audio to Azure OpenAI GPT-4o, which transcribes the voice input, understands the intent, and then triggers a RAG pipeline. Instead of relying solely on the model's internal memory, the system performs a real-time query against the ServiceNow Product Knowledge Base, retrieving relevant knowledge articles, troubleshooting guides, or support workflows. These results are embedded directly into the prompt, creating an enriched context that is passed to the language model via Azure AI Foundry. The model then generates a natural, contextually accurate spoken response, which is converted back into audio and voiced to the user—creating a seamless end-to-end Voice RAG experience.

This approach ensures that responses are not only conversational but also deeply grounded in trusted enterprise knowledge. Ideal for helpdesk automation, HR support, and IT troubleshooting—where users prefer speaking naturally and need verified, document-backed responses in real time.

🔷 Architecture Pattern 4: Conversational Customer Support with AI Avatars and Azure AI

This architecture delivers rich, conversational experiences by integrating AI avatars, Azure AI, and ServiceNow to offer human-like, intelligent customer support across channels. It merges natural speech, facial expression, and enterprise data to create a highly engaging support assistant.

The interaction begins when a user speaks with an AI avatar application, whether embedded in a web portal, mobile device, or kiosk. The voice is captured and processed through a speech-to-text pipeline, which feeds the Avatar Module and Live Discussions Engine to manage lip-sync, emotional tone, and turn-taking. Behind the scenes, the avatar is connected to Azure AI services, including Custom Neural Voice (CNV) and Azure OpenAI, which enable the avatar to understand intent and generate responses in natural, conversational language.

Most critically, the system integrates directly with the ServiceNow platform. Through secure APIs, the avatar queries ServiceNow to:

- Retrieve case status updates
- Provide summaries of incident history
- Look up Knowledge Base articles
- Trigger incident creation if needed

These ServiceNow results are then passed through the text-to-speech module, with support for multilingual voice synthesis, and rendered by the avatar using expressive animation. Responses are visually delivered as live or pre-rendered avatar videos, creating a truly interactive and personalized experience. This pattern not only answers basic questions but also surfaces dynamic enterprise data—turning the AI avatar into a frontline voice agent capable of real-time, connected support across IT, HR, or customer service domains. Best for branded digital experiences, frontline support stations, or HR/IT helpdesk automation where facial presence, empathy, and backend integration are essential.

✨ Closing Thoughts: The Future of Customer Support Is Here

Customer expectations have evolved—and so must the way we deliver support. By combining the power of Azure OpenAI, Azure AI Services, and ServiceNow, we're not just automating tasks—we're reinventing how organizations connect with their users. Whether it's:

- A unified voice agent handling IT tickets and CRM queries,
- A multi-agent architecture scaling across departments,
- A voice-enabled RAG system delivering knowledge-grounded answers in real time, or
- A human-like AI avatar offering face-to-face support—

these architectures are driving a new era of intelligent, conversational, and scalable customer service.

👉 Join us at the Microsoft Booth during ServiceNow Knowledge 2025 (starting May 6th) to experience these solutions live, explore the tech behind them, and imagine how they can transform your business. Let's build the future of support—together.