
Azure AI Foundry Blog

Interactive AI Avatars: Building Voice Agents with Azure Voice Live API

srikantan, Microsoft
Oct 17, 2025

Azure Voice Live API combines GPT-realtime's intelligent voice processing with photorealistic avatars that respond instantly through WebRTC streaming, bridging the gap between traditional chatbots and authentic human interaction. This article covers how to use this avatar technology, real-time audio processing, and a WebSocket-to-WebRTC architecture to create truly lifelike AI assistants.

Azure Voice Live API recently reached General Availability, marking a significant milestone in conversational AI technology. This unified API surface doesn't just enable speech-to-speech capabilities for AI agents; it revolutionizes the entire experience by streaming interactions through lifelike avatars.

Built on the powerful speech-to-speech capabilities of the GPT-4 Realtime model, Azure Voice Live API offers developers unprecedented flexibility:

- Out-of-the-box or custom avatars from Azure AI Services

- Wide range of neural voices, including Indic language voices like the one featured in this demo

- Single API interface that handles both audio processing and avatar streaming

- Real-time responsiveness with sub-second latency

In this post, I'll walk you through building a retail e-commerce voice agent that demonstrates this technology. While this implementation focuses on retail apparel, the architecture is entirely generic and can be adapted to any domain (healthcare, banking, education, or customer support) by simply changing the system prompt and implementing domain-specific tool integrations.

The Challenge: Navigating Uncharted Territory

At the time of writing, documentation for implementing avatar features with Azure Voice Live API is minimal. The protocol-specific intricacies around avatar video streaming, and the complex sequence of steps required to establish a live avatar connection, were initially overwhelming.

This is where Agent mode in GitHub Copilot in Visual Studio Code proved extremely useful. Through iterative conversations with the AI agent, I worked out how to implement avatar streaming without getting lost in low-level protocol details. Here's how different AI models contributed to this solution:

- Claude Sonnet 4.5: Rapidly architected the application structure, designing the hybrid WebSocket + WebRTC architecture with TypeScript/Vite frontend and FastAPI backend

- GPT-5-Codex (Preview): Instrumental in implementing the complex avatar streaming components, handling WebRTC peer connections, and managing the bidirectional audio flow

Architecture Overview: A Hybrid Approach

The architecture comprises the following components:

🐳 Container Application Architecture

  1. Vite Server: Node.js-based development server that serves the React application. In development, it provides hot module replacement and proxies API calls to FastAPI. In production, the React app is built into static files served by FastAPI.
  2. FastAPI with ASGI: Python web framework running on the uvicorn ASGI server. ASGI (Asynchronous Server Gateway Interface) enables efficient handling of many concurrent connections, which is crucial for WebSocket connections and real-time audio processing.

🤖 AI & Voice Services Integration

  1. Azure Voice Live API: Primary service that manages the connection to GPT-4 Realtime Model, provides avatar video generation, neural text-to-speech, and WebSocket gateway functionality
  2. GPT-4 Realtime Model: Accessed through Azure Voice Live API for real-time audio processing, function calling, and intelligent conversation management
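Avatar and voice selection are part of the session configuration sent to Azure Voice Live over the WebSocket. A sketch of how that `session.update` payload might be assembled; the field layout follows the Voice Live protocol, but treat the specific voice name, avatar character, and style values as illustrative:

```python
import json

def build_session_update(voice_name: str, avatar_character: str, avatar_style: str) -> str:
    """Assemble a session.update event enabling voice + avatar output."""
    event = {
        "type": "session.update",
        "session": {
            # Neural text-to-speech voice (illustrative Indic-capable voice)
            "voice": {"name": voice_name, "type": "azure-standard"},
            # Out-of-the-box avatar character/style from Azure AI Services
            "avatar": {"character": avatar_character, "style": avatar_style},
            # Server-side voice activity detection for natural turn-taking
            "turn_detection": {"type": "server_vad"},
        },
    }
    return json.dumps(event)

payload = build_session_update("en-IN-NeerjaNeural", "lisa", "casual-sitting")
```

The serialized event is then sent as a text frame on the already-open Voice Live WebSocket, before any audio is streamed.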

🔄 Communication Flows

  1. Audio Flow: Browser β†’ WebSocket β†’ FastAPI β†’ WebSocket β†’ Azure Voice Live API β†’ GPT-4 Realtime Model
  2. Video Flow: Browser ↔ WebRTC Direct Connection ↔ Azure Voice Live API (bypasses backend for performance)
  3. Function Calls: GPT-4 Realtime (via Voice Live) β†’ FastAPI Tools β†’ Business APIs β†’ Response β†’ GPT-4 Realtime (via Voice Live)
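When the model decides to call a tool (flow 3 above), the backend receives a function-call event, invokes the matching business API, and returns the output as a conversation item. A minimal dispatcher sketch, with a hypothetical `get_order_status` tool standing in for the real business APIs:

```python
import json
from typing import Any, Callable

# Hypothetical business tools the retail agent can call
TOOLS: dict[str, Callable[..., Any]] = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def handle_function_call(event: dict) -> dict:
    """Turn a model function-call event into the output event sent back."""
    name = event["name"]
    args = json.loads(event["arguments"])  # model sends arguments as a JSON string
    result = TOOLS[name](**args)
    return {
        "type": "conversation.item.create",
        "item": {
            "type": "function_call_output",
            "call_id": event["call_id"],  # ties the output to the original call
            "output": json.dumps(result),
        },
    }

reply = handle_function_call({
    "name": "get_order_status",
    "call_id": "call_1",
    "arguments": json.dumps({"order_id": "A123"}),
})
```

After sending the output, the backend asks the model to continue responding so the avatar can speak the result.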

🤖 Business Process Automation Workflows / RAG

  1. Shipment Logic App Agent: Analyzes orders, validates data, creates shipping labels, and updates tracking information
  2. Conversation Analysis Agent: An Azure Logic App reviews complete conversations, performs sentiment analysis, generates quality scores with justification, and stores insights for continuous improvement
  3. Knowledge Retrieval: Azure AI Search is used to reason over product manuals and help answer customer queries about policies and products
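As an illustration of how a completed conversation could reach the analysis Logic App, the backend can POST the transcript to the Logic App's HTTP trigger. The URL and payload field names here are assumptions for the sketch:

```python
import json
import urllib.request

def build_analysis_payload(session_id: str, transcript: list[dict]) -> dict:
    """Package a finished conversation for the analysis Logic App."""
    return {
        "session_id": session_id,
        "turns": transcript,
        "turn_count": len(transcript),
    }

def post_conversation(trigger_url: str, payload: dict) -> None:
    """Fire the Logic App's HTTP trigger; add retries/auth in production."""
    req = urllib.request.Request(
        trigger_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

payload = build_analysis_payload(
    "sess-1", [{"role": "user", "text": "Where is my order?"}]
)
```

Running the analysis asynchronously after the session ends keeps it off the latency-sensitive voice path.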

The solution implements a hybrid architecture that leverages both WebSocket proxying and direct WebRTC connections. This design keeps the conversational audio flow manageable and secure through the backend, while the bandwidth-intensive avatar video streams directly to the browser for optimal performance.
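The audio leg of this hybrid design is essentially a bidirectional relay between two WebSockets. The sketch below captures just the pumping logic, with asyncio queues standing in for the browser and Voice Live connections (real connection setup omitted):

```python
import asyncio

async def pump(source: asyncio.Queue, sink: asyncio.Queue) -> None:
    """Forward messages one-way until the sender closes with a None sentinel."""
    while True:
        msg = await source.get()
        if msg is None:  # sentinel: connection closed
            await sink.put(None)
            return
        await sink.put(msg)

async def relay(client_in, upstream_out, upstream_in, client_out) -> None:
    """Run both directions concurrently, as the FastAPI proxy does."""
    await asyncio.gather(
        pump(client_in, upstream_out),   # browser mic audio -> Voice Live
        pump(upstream_in, client_out),   # model audio/events -> browser
    )

async def demo() -> list:
    a, b, c, d = (asyncio.Queue() for _ in range(4))
    for chunk in (b"audio-1", b"audio-2", None):
        await a.put(chunk)
    await c.put(None)  # upstream side already closed in this demo
    await relay(a, b, c, d)
    out = []
    while not b.empty():
        out.append(b.get_nowait())
    return out

received = asyncio.run(demo())
```

In the real proxy the queues are replaced by `websocket.receive_bytes()`/`send_bytes()` on the FastAPI side and the Voice Live client connection on the other.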

 

 

The flow used in the Avatar communication:

```
Frontend                 FastAPI Backend           Azure Voice Live API
   │                          │                          │
   │ 1. Request Session       │                          │
   │─────────────────────────►│                          │
   │                          │ 2. Create Session        │
   │                          │─────────────────────────►│
   │                          │                          │
   │                          │ 3. Session Config        │
   │                          │    (with avatar settings)│
   │                          │─────────────────────────►│
   │                          │                          │
   │                          │ 4. session.updated       │
   │                          │    (ICE servers)         │
   │ 5. ICE servers           │◄─────────────────────────│
   │◄─────────────────────────│                          │
   │                          │                          │
   │ 6. Click "Start Avatar"  │                          │
   │                          │                          │
   │ 7. Create RTCPeerConn    │                          │
   │    with ICE servers      │                          │
   │                          │                          │
   │ 8. Generate SDP Offer    │                          │
   │                          │                          │
   │ 9. POST /avatar-offer    │                          │
   │─────────────────────────►│                          │
   │                          │ 10. Encode & Send SDP    │
   │                          │─────────────────────────►│
   │                          │                          │
   │                          │ 11. session.avatar.      │
   │                          │     connecting           │
   │                          │     (SDP answer)         │
   │ 12. SDP Answer           │◄─────────────────────────│
   │◄─────────────────────────│                          │
   │                          │                          │
   │ 13. setRemoteDescription │                          │
   │                          │                          │
   │ 14. WebRTC Handshake     │                          │
   │◄─────────────────────────┼─────────────────────────►│
   │    (Direct Connection)   │                          │
   │                          │                          │
   │ 15. Video/Audio Stream   │                          │
   │◄────────────────────────────────────────────────────│
   │    (Bypasses Backend)    │                          │
```
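Steps 9–12 of the flow hinge on relaying the browser's SDP offer upstream and returning the SDP answer. Voice Live carries the SDP base64-encoded inside session events; the helpers below sketch that exchange, with the exact event and field names (`session.avatar.connect`, `client_sdp`, `server_sdp`) being assumptions to verify against the repo:

```python
import base64
import json

def encode_sdp(sdp: str) -> str:
    """The SDP payload travels base64-encoded over the session WebSocket."""
    return base64.b64encode(sdp.encode()).decode()

def build_avatar_connect(sdp_offer: str) -> str:
    """Event the backend sends upstream after POST /avatar-offer (step 10)."""
    return json.dumps({
        "type": "session.avatar.connect",  # assumed event name for this flow
        "client_sdp": encode_sdp(sdp_offer),
    })

def extract_answer(event: dict) -> str:
    """Pull the SDP answer out of the session.avatar.connecting event (step 11)."""
    return base64.b64decode(event["server_sdp"]).decode()

offer = "v=0\r\no=- 0 0 IN IP4 127.0.0.1\r\n"
event = json.loads(build_avatar_connect(offer))
```

The decoded answer is handed back to the browser, which applies it via `setRemoteDescription` (step 13) before the direct WebRTC handshake begins.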

 

For the technical details behind the implementation, refer to the GitHub repo shared in this post.

Here is a video demo of the application in action.

 
