speech
Real-Time Speech Intelligence for Global Scale: gpt-4o-transcribe-diarize in Azure AI Foundry
Voice is a natural interface for communication. Now, with the general availability of gpt-4o-transcribe-diarize, the new automatic speech recognition (ASR) model in Azure AI Foundry, transforming speech into actionable text is faster, smarter, and more accurate than ever. This launch marks a significant milestone in our mission to empower organizations with AI that delivers speed, accuracy, and enterprise-grade reliability. With gpt-4o-transcribe-diarize seamlessly integrated, businesses can unlock critical insights from conversations, instantly converting audio into text with ultra-low latency and outstanding accuracy across 100+ languages. Whether you're enhancing live event accessibility, analyzing customer interactions, or enabling intelligent voice-driven applications, gpt-4o-transcribe-diarize captures the spoken word and turns it into material for real-time decision-making. Experience how Azure AI’s innovation in speech technology is helping to redefine productivity and global reach, setting a new standard for audio intelligence in the enterprise landscape.

Why gpt-4o-transcribe-diarize Matters
Businesses today operate in a world where conversations drive decisions. From customer support calls to virtual meetings, audio data holds critical insights. gpt-4o-transcribe-diarize unlocks these insights, converting speech to text with ultra-low latency and high accuracy across 100+ languages. Whether you’re captioning live events, analyzing call center interactions, or building voice-driven applications, gpt-4o-transcribe-diarize brings real-time intelligence to your workflows.

Key Features
Lightning-Fast Transcription: Convert 10 minutes of audio in ~15 seconds with our new Fast Transcription API.
Global Language Coverage: Support for 100+ languages and dialects for inclusive, global experiences.
Seamless Integration: Available in Azure AI Foundry with managed endpoints for easy deployment and scale.

Real-World Impact
Imagine a reporter summarizing interviews in real time, a financial institution transcribing calls instantly, or a global retailer powering multilingual voice assistants, all with the speed and security of Azure AI Foundry. gpt-4o-transcribe-diarize can make these scenarios possible today.

Pricing and regional availability for gpt-4o-transcribe-diarize
Model: gpt-4o-transcribe-diarize | Deployment: Global Standard (Paygo) | Regions: East US 2, Sweden Central | Price ($/1M tokens): Text input $2.50, Audio input $6.00, Output $10.00

gpt-4o-transcribe-diarize in audio AI innovation context
gpt-4o-transcribe-diarize is part of a broader wave of audio AI innovation on Azure, joining new models like OpenAI gpt-realtime and gpt-audio that are purpose-built for expressive, low-latency voice experiences. While gpt-4o-transcribe-diarize delivers ultra-fast transcription with enterprise-grade accuracy, gpt-realtime enables natural, emotionally rich voice interactions with millisecond responsiveness—ideal for live conversations, voice agents, and multimodal applications. Meanwhile, audio models like gpt-4o-mini-transcribe and gpt-4o-mini-tts extend the platform’s capabilities with customizable speech synthesis and real-time captioning, making Azure AI a comprehensive solution for building intelligent, production-ready voice systems.

gpt-realtime Features
OpenAI claims the gpt-realtime model introduces a new standard for voice-first applications, combining expressive audio generation with ultra-low latency and natural conversational flow.
It’s designed to power real-time interactions that feel like natural, responsive speech.

Key Features:
Millisecond Latency: Enables live responsiveness suitable for real-time conversations, kiosks, and voice agents.
Emotionally Expressive Voices: Supports nuanced speech delivery with voices like Marin and Cedar, capable of conveying tone, emotion, and intent.
Natural Turn-Taking: Built-in mechanisms for detecting pauses and transitions, allowing fluid back-and-forth dialogue.
Function Calling Support: Seamlessly integrates with backend systems to trigger actions based on voice input.
Multimodal Readiness: Designed to work with text, audio, and visual inputs for rich, interactive experiences.
Stable APIs for Production: Enterprise-grade reliability with consistent behavior across sessions and deployments.
These features make gpt-realtime a foundational model for building intelligent voice interfaces that go beyond transcription—delivering conversational intelligence in real time.

gpt-realtime Use Cases
With its expressive audio capabilities and real-time responsiveness, gpt-realtime unlocks new possibilities across industries. Whether enhancing customer engagement or streamlining operations, it brings voice AI into the heart of enterprise workflows. Examples include:
Customer Service Agents: Power virtual agents that respond instantly with natural, expressive tones, improving customer satisfaction and reducing wait times.
Retail Kiosks & Smart Devices: Enable voice-driven product discovery, troubleshooting, and checkout experiences with real-time feedback.
Multilingual Voice Assistants: Deliver localized, expressive voice experiences across global markets with support for multiple languages and dialects.
Live Captioning & Accessibility: Combine gpt-4o-transcribe-diarize and gpt-realtime to provide real-time captions and voice synthesis for inclusive experiences.
These use cases demonstrate how gpt-realtime transforms voice into a strategic interface—bridging human communication and intelligent systems with speed and accuracy. Ready to transform voice into value? Learn more and start building with gpt-4o-transcribe-diarize.
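To make the transcription workflow concrete, below is a minimal sketch of calling a gpt-4o-transcribe-diarize deployment through the Azure OpenAI audio transcriptions endpoint with the openai Python package. The API version, environment variable names, and file name are illustrative assumptions, and the exact response shape for diarization (speaker-labeled segments) should be checked against the model card and API reference.

```python
# Minimal sketch: send a local audio file to a gpt-4o-transcribe-diarize
# deployment via the Azure OpenAI audio transcriptions endpoint.
# Assumptions: API version, env var names, and file name are placeholders;
# the exact diarization fields on the response may differ.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-03-01-preview",          # assumed API version
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

with open("customer_call.wav", "rb") as audio_file:
    result = client.audio.transcriptions.create(
        model="gpt-4o-transcribe-diarize",      # your deployment name
        file=audio_file,
    )

# Plain transcript text; speaker-labeled segments (if present) live on the
# full response object returned by the service.
print(result.text)
```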
Using the Voice Live API in Azure AI Foundry
In this blog post, we’ll explore the Voice Live API from Azure AI Foundry. Officially released for general availability on October 1, 2025, this API unifies speech recognition, generative AI, and text-to-speech capabilities into a single, streamlined interface. It removes the complexity of manually orchestrating multiple components and ensures a consistent developer experience across all models, making it easy to switch and experiment. What sets Voice Live API apart are its advanced conversational enhancements, including:
Semantic Voice Activity Detection (VAD) that’s robust against background noise and accurately detects when a user intends to speak.
Semantic end-of-turn detection that supports natural pauses in conversation.
Server-side audio processing features like noise suppression and echo cancellation, simplifying client-side development.
Let’s get started.

1. Getting Started with Voice Live API
The Voice Live API ships with an SDK that lets you open a single realtime WebSocket connection and then do everything—stream microphone audio up, receive synthesized audio/text/function-call events down—without writing any of the low-level networking plumbing. This is how the connection is opened with the Python SDK.

```python
from azure.ai.voicelive.aio import connect
from azure.core.credentials import AzureKeyCredential

async with connect(
    endpoint=VOICE_LIVE_ENDPOINT,  # https://<your-foundry-resource>.cognitiveservices.azure.com/
    credential=AzureKeyCredential(VOICE_LIVE_KEY),
    model="gpt-4o-realtime",
    connection_options={
        "max_msg_size": 10 * 1024 * 1024,  # allow streamed PCM
        "heartbeat": 20,                   # keep socket alive
        "timeout": 20,                     # network resilience
    },
) as connection:
    ...
```

Notice that you don't need an underlying deployment nor manage any generative AI models, as the API handles all the underlying infrastructure. Immediately after connecting, declare what kind of conversation you want. This is where you “teach” the session the model instructions, which voice to synthesize, what tool functions it may call, and how to detect speech turns:

```python
from azure.ai.voicelive.models import (
    RequestSession, Modality, AzureStandardVoice, InputAudioFormat,
    OutputAudioFormat, AzureSemanticVad, ToolChoiceLiteral,
    AudioInputTranscriptionOptions
)

session_config = RequestSession(
    modalities=[Modality.TEXT, Modality.AUDIO],
    instructions="Assist the user with account questions succinctly.",
    voice=AzureStandardVoice(name="alloy", type="azure-standard"),
    input_audio_format=InputAudioFormat.PCM16,
    output_audio_format=OutputAudioFormat.PCM16,
    turn_detection=AzureSemanticVad(
        threshold=0.5, prefix_padding_ms=300, silence_duration_ms=500
    ),
    tools=[  # optional
        {
            "name": "get_user_information",
            "description": "Retrieve profile and limits for a user",
            "input_schema": {
                "type": "object",
                "properties": {"user_id": {"type": "string"}},
                "required": ["user_id"]
            }
        }
    ],
    tool_choice=ToolChoiceLiteral.AUTO,
    input_audio_transcription=AudioInputTranscriptionOptions(model="whisper-1"),
)

await connection.session.update(session=session_config)
```

After session setup, it is pure event-driven flow:

```python
async for event in connection:
    if event.type == ServerEventType.RESPONSE_AUDIO_DELTA:
        playback_queue.put(event.delta)
    elif (event.type == ServerEventType.CONVERSATION_ITEM_CREATED
          and event.item.type == ItemType.FUNCTION_CALL):
        handle_function_call(event)
```

That’s the core: one connection, one session config message, then pure event-driven flow.
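Putting the three snippets together, a minimal end-to-end skeleton might look like the sketch below. It only uses the classes shown above (connect, RequestSession, ServerEventType); audio playback is stubbed out, and the endpoint and key environment variable names are placeholders.

```python
# Minimal skeleton combining the connection, session config, and event loop
# shown above. Audio playback is stubbed; env var names are placeholders.
import asyncio
import os

from azure.ai.voicelive.aio import connect
from azure.ai.voicelive.models import Modality, RequestSession, ServerEventType
from azure.core.credentials import AzureKeyCredential


def play_audio_chunk(pcm_bytes) -> None:
    """Stub: feed PCM16 audio to a speaker device or output buffer."""


async def main() -> None:
    async with connect(
        endpoint=os.environ["VOICE_LIVE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["VOICE_LIVE_KEY"]),
        model="gpt-4o-realtime",
    ) as connection:
        # 1. Describe the conversation once, right after connecting.
        await connection.session.update(
            session=RequestSession(
                modalities=[Modality.TEXT, Modality.AUDIO],
                instructions="Assist the user with account questions succinctly.",
            )
        )

        # 2. Everything afterwards is reacting to server events.
        async for event in connection:
            if event.type == ServerEventType.RESPONSE_AUDIO_DELTA:
                play_audio_chunk(event.delta)
            elif event.type == ServerEventType.RESPONSE_DONE:
                print("Turn complete")


if __name__ == "__main__":
    asyncio.run(main())
```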
2. Deep Dive: Tool (Function) Handling in the Voice Live SDK
In the Voice Live context, “tools” are model-invocable functions you expose with a JSON schema. The SDK streams a structured function call request (name + incrementally streamed arguments), you execute real code locally, then feed the JSON result back so the model can incorporate it into its next spoken (and/or textual) turn. Let’s unpack the full lifecycle.

First, the model emits a CONVERSATION_ITEM_CREATED event whose item.type == FUNCTION_CALL:

```python
if event.item.type == ItemType.FUNCTION_CALL:
    await self._handle_function_call(event, connection)
```

Arguments stream (possibly token-by-token) until the SDK signals RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE. Optionally, the SDK may also complete the “response” segment with RESPONSE_DONE before you run the tool. Then we execute the local Python function, and explicitly request a new model response via connection.response.create(), telling the model to incorporate the tool result into a natural-language (and audio) answer.

```python
async def _handle_function_call(self, created_evt, connection):
    call_item = created_evt.item  # ResponseFunctionCallItem
    name = call_item.name
    call_id = call_item.call_id
    prev_id = call_item.id

    # 1. Wait until arguments are fully streamed
    args_done = await _wait_for_event(
        connection, {ServerEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE}
    )
    assert args_done.call_id == call_id
    arguments = args_done.arguments  # JSON string

    # 2. (Optional) Wait for RESPONSE_DONE to avoid race with model finishing segment
    await _wait_for_event(connection, {ServerEventType.RESPONSE_DONE})

    # 3. Execute
    func = self.available_functions.get(name)
    if not func:
        # Optionally send an error function output
        return
    result = await func(arguments)  # Implementations are async in this sample

    # 4. Send output
    output_item = FunctionCallOutputItem(call_id=call_id, output=json.dumps(result))
    await connection.conversation.item.create(
        previous_item_id=prev_id, item=output_item
    )

    # 5. Trigger follow-up model response
    await connection.response.create()
```

3. Sample App
Try the repo with the sample app we have created, with all the required infrastructure already automated. This sample app simulates a friendly real-time contact-center rep who can listen continuously, understand you as you speak, instantly look up things like your credit card's upcoming due date or a product detail via function calls, and then answer back naturally in a Brazilian Portuguese neural voice with almost no lag. Behind the scenes it streams your microphone audio to Azure's Voice Live (GPT-4o realtime) model, transcribes and reasons on the fly, selectively triggers lightweight "get user information" or "get product information" lookups to Azure AI Search, and speaks responses right back to you. Happy Coding!
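One closing note on the function-calling walkthrough in section 2: the handler relies on a _wait_for_event helper that the post does not show. A minimal version might look like the sketch below. It is an illustrative implementation rather than part of the shipped SDK; a production version would also want timeouts and handling for the events it chooses not to ignore.

```python
# Illustrative helper for the function-calling flow above: consume events from
# the live connection until one of the expected types arrives. Not part of the SDK.
async def _wait_for_event(connection, expected_types):
    async for event in connection:
        if event.type in expected_types:
            return event
        # Other events (audio deltas, transcripts, etc.) could be forwarded to
        # the rest of the application here instead of being dropped.
    raise RuntimeError("Connection closed before the expected event arrived")
```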
The Future of AI: How Lovable.dev and Azure OpenAI Accelerate Apps that Change Lives
Discover how Charles Elwood, a Microsoft AI MVP and TEDx Speaker, leverages Lovable.dev and Azure OpenAI to create impactful AI solutions. From automating expense reports to restoring voices, translating gestures to speech, and visualizing public health data, Charles's innovations are transforming lives and democratizing technology. Follow his journey to learn more about AI for good.

Upgrade your voice agent with Azure AI Voice Live API
Today, we are excited to announce the general availability of Voice Live API, which enables real-time speech-to-speech conversational experience through a unified API powered by generative AI models. With Voice Live API, developers can easily voice-enable any agent built with the Azure AI Foundry Agent Service. Azure AI Foundry Agent Service, enables the operation of agents that make decisions, invoke tools, and participate in workflows across development, deployment, and production. By eliminating the need to stitch together disparate components, Voice Live API offers a low latency, end-to-end solution for voice-driven experiences. As always, a diverse range of customers provided valuable feedback during the preview period. Along with announcing general availability, we are also taking this opportunity to address that feedback and improve the API. Following are some of the new features designed to assist developers and enterprises in building scalable, production-ready voice agents. More natively integrated GenAI models including GPT-Realtime Voice Live API enables developers to select from a range of advanced AI models designed for conversational applications, such as GPT-Realtime, GPT-5, GPT-4.1, Phi, and others. These models are natively supported and fully managed, eliminating the need for developers to manage model deployment or plan for capacity. These natively supported models may each have a distinct stage in their life cycle (e.g. public preview, generally available) and be subject to varying pricing structures. The table below lists the models supported in each pricing tier. Pricing Tier Generally Available In Public Preview Voice Live Pro GPT-Realtime, GPT-4.1, GPT-4o GPT-5 Voice Live Standard GPT-4o-mini, GPT-4.1-mini GPT-4o-Mini-Realtime, GPT-5-mini Voice Live Lite NA Phi-4-MM-Realtime, GPT-5-Nano, Phi-4-Mini Extended speech languages to 140+ Voice Live API now supports speech input in over 140 languages/locales. View all supported languages by configuration. Automatic multilingual configuration is enabled by default, using the multilingual model. Integrated with Custom Speech Developers need customization to better manage input and output for different use cases. Besides the support for Custom Voice released in May 2025, Voice Live now supports seamless integration with Custom Speech for improved speech recognition results. Developers can also improve speech input accuracy with phrase lists and refine speech synthesis pronunciation using custom lexicons, all without training a model. Learn how to customize speech and voice models for Voice Live API. Natural HD voices upgraded Neural HD voices in Azure AI Speech are contextually aware and engineered to provide a natural, expressive experience, making them ideal for voice agent applications. The latest V2 upgrade enhances lifelike qualities with features such as natural pauses, filler words, and seamless transitions between speaking styles, all available with Voice Live API. Check out the latest demo of Ava Neural HD V2. Improved VAD features for interruption detection Voice Live API now features semantic Voice Activity Detection (VAD), enabling it to intelligently recognize pauses and filler word interruptions in conversations. In the latest en-US evaluation on Multilingual filler words data, Voice Live API achieved ~20% relative improvement from previous VAD models. 
This leap in performance is powered by integrating semantic VAD into the n-best pipeline, allowing the system to better distinguish meaningful speech from filler noise and enabling more accurate latency tracking and cleaner segmentation, especially in multilingual and noisy environments. 4K avatar support Voice Live API enables efficient integration with streaming avatars. With the latest updates, avatar options offer support for high-fidelity 4K video models. Learn more about the avatar update. Improved function calling and integration with Azure AI Foundry Agent Service Voice Live API enables function calling to assist developers in building robust voice agents with their chosen generative AI models. This release improves asynchronous function calls and enhances integration with Azure AI Foundry Agent Service for agent creation and operation. Learn more about creating a voice live real-time voice agent with Azure AI Foundry Agent Service. More developer resources and availability in more regions Developer resources are available in C# and Python, with more to come. Get started with Voice Live API. Voice Live API is available in more regions now including Australia East, East US, Japan East, and UK South, besides the previously supported regions such as Central India, East US 2, South East Asia, Sweden Central, and West US 2. Check the features supported in each region. Customers adopting Voice Live In healthcare, patient experience is always the top priority. With Voice Live, eClinicalWorks’ healow Genie contact center solution is now taking healthcare modernization a step further. healow is piloting Voice Live API for Genie to inform patients about their upcoming appointments, answer common questions, and return voicemails. Reducing these routine calls saves healthcare staff hours each day and boosts patient satisfaction through timely interactions. “We’re looking forward to using Azure AI Foundry Voice Live API so that when a patient calls, Genie can detect the question and respond in a natural voice in near-real time,” said Sidd Shah, Vice President of Strategy & Business Growth at healow. “The entire roundtrip is all happening in Voice Live API.” If a patient asks about information in their medical chart, Genie can also fetch data from their electronic health record (EHR) and provide answers. Read the full story here. “If we did multiple hops to go across different infrastructures, that would add up to a diminished patient experience. The Azure AI Foundry Voice Live API is integrated into one single, unified solution, delivering speech-to-text and text-to-speech in the same infrastructure.” Bhawna Batra, VP of Engineering at eClinicalWorks Capgemini, a global business and technology transformation partner, is reimagining its global service desk managed operations through its Capgemini Cloud Infrastructure Services (CIS) division. The first phase covers 500,000 users across 45 clients, which is only part of the overall deployment base. The goal is to modernize the service desk to meet changing expectations for speed, personalization, and scale. To drive this transformation, Capgemini launched the “AI-Powered Service Desk” platform powered by Microsoft technologies including Dynamics 365 Contact Center, Copilot Studio, and Azure AI Foundry. A key enhancement was the integration of Voice Live API for real-time voice interactions, enabling intelligent, conversational support across telephony channels. 
The new platform delivers a more agile, truly conversational, AI-driven service experience, automating routine tasks and enhancing agent productivity. With scalable voice capabilities and deep integration across Microsoft’s ecosystem, Capgemini is positioned to streamline support operations, reduce response times, and elevate customer satisfaction across its enterprise client base.

"Integrating Microsoft’s Voice Live API into our platform has been transformative. We’re seeing measurable improvements in user engagement and satisfaction thanks to the API’s low-latency, high-quality voice interactions. As a result, we are able to deliver more natural and responsive experiences, which have been positively received by our customers." Stephen Hilton, EVP Chief Operating Officer at CIS Capgemini

Astra Tech, a fast-growing UAE-based technology group that is part of G42, is bringing Voice Live API to botim, its flagship fintech-first, AI-native platform. Eight out of 10 smartphone users in the UAE already rely on the app. The company is now reshaping botim from a communications tool into a fintech-first service, adding features such as digital wallets, international remittances, and micro-loans. To achieve its broader vision, Astra Tech set out to make botim simpler, more intuitive, and more human.

"Voice removes a lot of complexity, and it’s the most natural way to interact," says Frenando Ansari, Lead Product Manager at Astra Tech. "For users with low digital literacy or language barriers, tapping through traditional interfaces can be difficult. Voice personalizes the experience and makes it accessible in their preferred language."

"The Voice Live API acts as a connective tissue for AI-driven conversation across every layer of the app. It gives us a standardized framework so that different product teams can incorporate voice without needing to hire deep AI expertise." Frenando Ansari, Lead Product Manager at Astra Tech

"The most impressive thing about the Voice Live API is the voice activity detection and the noise control algorithm." Meng Wang, AI Head at Astra Tech

Get started
Voice Live API is transforming how developers build voice-enabled agent systems by providing an integrated, scalable, and efficient solution. By combining speech recognition, generative AI, and text-to-speech functionalities into a unified interface, it addresses the challenges of traditional implementations, enabling faster development and superior user experiences. From streamlining customer service to enhancing education and public services, the opportunities are endless. The future of voice-first solutions is here—let’s build it together!
Voice Live API introduction (video)
Try Voice Live in Azure AI Foundry
Voice Live API documents
Voice Live quickstart
Voice Live Agent code sample in GitHub
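As a quick companion to the resources above, the sketch below shows how two of the capabilities discussed in this post (an HD neural voice and semantic VAD for interruption detection) map onto the session configuration pattern from the Python SDK shown earlier in this feed. The voice name and threshold values are illustrative, not prescriptive.

```python
# Illustrative Voice Live session configuration: HD neural voice plus semantic
# VAD for barge-in detection. Voice name and thresholds are examples only;
# the generative model itself is selected when opening the connection.
from azure.ai.voicelive.models import (
    AzureSemanticVad,
    AzureStandardVoice,
    Modality,
    RequestSession,
)

session_config = RequestSession(
    modalities=[Modality.TEXT, Modality.AUDIO],
    instructions="You are a helpful voice agent for appointment reminders.",
    voice=AzureStandardVoice(
        name="en-US-Ava:DragonHDLatestNeural",  # example HD voice name
        type="azure-standard",
    ),
    turn_detection=AzureSemanticVad(
        threshold=0.5,
        prefix_padding_ms=300,
        silence_duration_ms=500,
    ),
)

# Applied on an open connection, as in the earlier snippets:
# await connection.session.update(session=session_config)
```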
Announcing Live Interpreter API - Now in Public Preview
Today, we’re excited to introduce Live Interpreter, a breakthrough new capability in Azure Speech Translation that makes real-time, multilingual communication effortless. Live Interpreter continuously identifies the language being spoken without requiring you to set an input language, and delivers low-latency speech-to-speech translation in a natural voice that preserves the speaker’s style and tone.
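Live Interpreter has its own API surface, so treat the snippet below only as a conceptual baseline: it shows the existing Azure Speech SDK translation classes with automatic source-language identification and a synthesized output voice, which is the pattern Live Interpreter streamlines. The candidate languages, target language, and voice are examples.

```python
# Baseline speech translation with the existing Azure Speech SDK (not the new
# Live Interpreter surface): auto-detect the spoken language from a candidate
# list, translate to Spanish, and pick a neural voice for synthesized output.
import os
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
translation_config.add_target_language("es")
translation_config.voice_name = "es-ES-ElviraNeural"  # example output voice

auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "fr-FR", "de-DE", "pt-BR"])

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config,
    auto_detect_source_language_config=auto_detect,
    audio_config=speechsdk.audio.AudioConfig(use_default_microphone=True))

result = recognizer.recognize_once()
print("Heard:  ", result.text)
print("Spanish:", result.translations["es"])
```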
Announcing gpt-realtime on Azure AI Foundry
We are thrilled to announce the general availability of our latest advancement in speech-to-speech technology: gpt-realtime. This new model represents a significant leap forward in our commitment to providing advanced and reliable speech-to-speech solutions. gpt-realtime is a new S2S (speech-to-speech) model with improved instruction following, designed to merge all of our speech-to-speech improvements into a single, cohesive model. This model is now available in the Real-time API, offering enhanced voice naturalness, higher audio quality, and improved function calling capabilities.

Key Features
New, natural, expressive voices: New voice options (Marin and Cedar) that bring a new level of naturalness and clarity to speech synthesis.
Improved Instruction Following: Enhanced capabilities to follow instructions more accurately and reliably.
Enhanced Voice Naturalness: More lifelike and expressive voice output.
Higher Audio Quality: Superior audio quality for a better user experience.
Improved Function Calling: Enhanced ability to call custom code defined by developers.
Image Input Support: Add images to context and discuss them via voice—no video required.
Check out the model card here: gpt-realtime

Pricing
Pricing for gpt-realtime is 20% lower compared to the previous gpt-4o-realtime preview. Pricing is based on usage per 1 million tokens.

Getting Started
gpt-realtime is available on Azure AI Foundry via Azure Models direct from Azure today. We are excited to see how developers and users will leverage these new capabilities to create innovative and impactful solutions. Check out the model on Azure AI Foundry and see detailed documentation in Microsoft Learn docs.
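For a feel of the wire protocol, here is a heavily simplified sketch of connecting to a gpt-realtime deployment over the Azure OpenAI Realtime WebSocket endpoint and requesting one spoken turn. The URL format, API version, and header name follow the usual Azure OpenAI pattern but should be verified against the Learn docs linked above; event names such as session.update and response.create come from the Realtime API.

```python
# Simplified sketch: one spoken turn from a gpt-realtime deployment over the
# Azure OpenAI Realtime WebSocket endpoint. API version and URL are assumptions
# to verify against the docs; audio deltas are only counted here, not played.
import asyncio
import json
import os

import websockets  # older versions use extra_headers instead of additional_headers


async def main() -> None:
    host = os.environ["AZURE_OPENAI_ENDPOINT"].replace("https://", "wss://")
    url = f"{host}/openai/realtime?api-version=2025-04-01-preview&deployment=gpt-realtime"
    headers = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}

    async with websockets.connect(url, additional_headers=headers) as ws:
        # Configure the session: voice and instructions.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"voice": "marin",
                        "instructions": "Answer briefly and warmly."},
        }))
        # Ask the model to produce one audio + text response.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["audio", "text"]},
        }))

        audio_chunks = 0
        async for message in ws:
            event = json.loads(message)
            if event["type"] == "response.audio.delta":
                audio_chunks += 1   # base64 PCM would be decoded and played here
            elif event["type"] == "response.done":
                print(f"Received {audio_chunks} audio chunks")
                break


asyncio.run(main())
```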
Build recap: new Azure AI Foundry resource, Developer APIs and Tools
At Microsoft Build 2025, we introduced the Azure AI Foundry resource, the Azure AI Foundry API, and supporting tools to streamline the end-to-end development lifecycle of AI agents and applications. These capabilities are designed to help developers accelerate time-to-market; support production-scale workloads with scale and central governance; and support administrators with a self-serve capability to enable their teams’ experimentation with AI in a controlled environment. The Azure AI Foundry resource type unifies agents, models and tools under a single management grouping, equipped with built-in enterprise-readiness capabilities — such as tracing & monitoring, agent and model-specific evaluation capabilities, and customizable enterprise setup configurations tailored to your organizational policies like using your own virtual networks. This launch represents our commitment to providing organizations with a consistent, efficient and centrally governable environment for building and operating the AI agents and applications of today, and tomorrow.

New platform capabilities
The new Foundry resource type evolves our vision for Azure AI Foundry as a unified Azure platform-as-a-service offering, enabling developers to focus on building applications rather than managing infrastructure, while taking advantage of native Azure platform capabilities like Azure Data and Microsoft Defender. Previously, Azure AI Foundry portal’s capabilities required the management of multiple Azure resources and SDKs to build an end-to-end application. New capabilities include:
The Foundry resource type gives administrators a consistent way of managing security and access to Agents, Models, Projects, and Azure tooling integration. With this change, Azure Role Based Access Control, Networking and Policies are administered under a single Azure resource provider namespace, for streamlined management. ‘Azure AI Foundry’ is a renaming of the former ‘Azure AI Services’ resource type, with access to new capabilities. While Azure AI Foundry still supports bring-your-own Azure resources, we now default to a fully Microsoft-managed experience, making it faster and easier to get started.
Foundry projects are folders that enable developers to independently create new environments for exploring new ideas and building prototypes, while managing data in isolation. Projects are child resources; they may be assigned their own admin controls but by default share common settings such as networking or connected resource access from their parent resource. This principle aims to take IT admins out of the day-to-day loop once security and governance are established at the resource level, enabling developers to self-serve confidently within their projects.
The Azure AI Foundry API is designed from the ground up to build and evaluate API-first agentic applications, and lets you work across model providers agnostically with a consistent contract.
The Azure AI Foundry SDK wraps the Foundry API, making it easy to integrate capabilities into code whether your application is built in Python, C#, JavaScript/TypeScript or Java.
The Azure AI Foundry for VS Code Extension complements your workflow with capabilities to help you explore models and develop agents, and is now supported with the new Foundry project type.
New built-in RBAC roles provide up-to-date role definitions to help admins differentiate access between Administrator, Project Manager and Project users.
Foundry RBAC actions follow strict control- and data-plane separation, making it easier to implement the principle of least privilege.

Why we built these new platform capabilities
If you are already building with Azure AI Foundry, these capabilities are meant to simplify platform management, enhance workflows that span multiple models and tools, and reinforce governance capabilities, as we see AI workloads grow more complex. The emergence of generative AI fundamentally changed how customers build AI solutions, requiring capabilities that span multiple traditional domains. We launched Azure AI Foundry to provide a comprehensive toolkit for exploring, building and evaluating this new wave of GenAI solutions. Initially, this experience was backed by two core Azure services: Azure AI Services for accessing models including those from OpenAI, and Azure Machine Learning’s hub for access to tools for orchestration and customization. With the emergence of AI agents composing models and tools, and production workloads demanding the enforcement of central governance across those, we are investing to bring the management of agents, models and their tooling integration layer together to best serve these workloads’ requirements. The Azure AI Foundry resource and Foundry API are purposefully designed to unify and simplify the composition and management of the core building blocks of AI applications:
Models
Agents & their tools
Observability, Security, and Trust
In this new era of AI, there is no one-size-fits-all approach to building AI agents and applications. That's why we designed the new platform as a comprehensive AI factory with modular, extensible, and interoperable components.

Foundry Project vs Hub-Based Project
Going forward, new agents and model-centric capabilities will only land on the new Foundry project type. This includes access to Foundry Agent Service in GA and the Foundry API. While we are transitioning to Azure AI Foundry as a managed platform service, the hub-based project type remains accessible in the Azure AI Foundry portal for GenAI capabilities that are not yet supported by the new resource type. Hub-based projects will continue to support use cases for custom model training in Azure Machine Learning Studio, CLI and SDK. For a full overview of capabilities supported by each project type, see this support matrix.

Azure AI Foundry Agent Service
The Azure AI Foundry Agent Service experience, now generally available, is powered by the new Foundry project. Existing customers exploring the GA experience will need the new AI Foundry resource. All new investments in the Azure AI Foundry Agent Service are focused on the Foundry project experience. Foundry projects act as secure units of isolation and collaboration — agents within a project share:
File storage
Thread storage (i.e. conversation history)
Search indexes
You can also bring your own Azure resources (e.g., storage, bring-your-own virtual network) to support compliance and control over sensitive data.

Start Building with Foundry
Azure AI Foundry is your foundation for scalable, secure, and production-grade AI development. Whether you're building your first agent or deploying a multi-agent workforce at scale, Azure AI Foundry is ready for what's next.

AI Avatars: Redefining Human-Digital Interaction in the Enterprise Era
In today’s AI-driven world, businesses are constantly seeking innovative ways to humanize digital experiences. AI Avatars are emerging as a powerful solution—bridging the gap between intelligent automation and authentic, human-like engagement. With advancements in speech synthesis, large language models, and avatar rendering technologies, organizations can now deploy AI-powered digital assistants that not only understand and respond but also interact with a lifelike presence. The Rise of AI Avatars in Enterprise Applications AI Avatars go beyond traditional chatbots or voice assistants. These virtual beings offer multimodal interaction—combining voice, visual cues, and conversational intelligence into a seamless user experience. Built on enterprise-grade platforms like Azure AI, these avatars can be integrated into customer support portals, digital kiosks, internal knowledge hubs, and more. Their utility spans a range of industries: Retail: Personalized shopping assistants that guide consumers through products. Healthcare: Virtual health concierges that help patients navigate care. Education: Interactive tutors that deliver lessons with empathy and responsiveness. HR and Training: Onboarding avatars that answer employee questions, onboard new hires, or provide compliance updates. One of our key partners, Cloudforce, has integrated AI Avatar technology directly into their flagship platform nebulaONE®. This integration enables enterprises to deploy digital assistants that are deeply embedded in business processes, offering contextualized support and real-time engagement. From training and onboarding to employee self-service, nebulaONE's agentic AI Avatars act as a digital bridge between users and systems—driving efficiency, engagement, and satisfaction. Partner Spotlight: Cloudforce’s Avatar Initiative To operationalize and productize AI Avatars, Microsoft collaborates with a growing ecosystem of partners. Cloudforce is one of the early pioneers in this space. Their work in embedding avatars into nebulaONE demonstrates what’s possible when advanced AI meets real-world enterprise needs. With a vision to transform user interaction across industries, Cloudforce built a production-grade AI Avatar module designed to support customer Q&A, knowledge discovery, and live guided walkthroughs. Leveraging Azure OpenAI, Azure AI Speech, and privately-deployed secure cloud infrastructure, they have brought conversational intelligence to life—with both a face and a voice. Looking ahead, Cloudforce’s broader vision is to bring AI Avatar capabilities to millions of students—delivering immersive learning experiences that blend interactivity, personalization, and scale. Their education-focused roadmap enhancements highlight the potential of avatars not just as productivity agents, but as accessible and empathetic digital educators, delivering equitable access to knowledge previously reserved for a fortunate few. This kind of partner innovation illustrates how AI Avatars can be customized and scaled to deliver tangible business value across multiple domains. Partner Contribution "Students are already embracing generative AI at a pace and proficiency that far exceeds many professional audiences. With Azure's AI Avatar technology, educators and institutions can tailor unique GenAI interactions that promote reasoning and learning over simply receiving answers the way they would with common public bots." says Husein Sharaf, Founder and CEO at Cloudforce. 
"We understand the concerns and hesitation that our education partners are currently grappling with, however we believe they can and should take an active role in shaping how this transformative technology is leveraged across their campuses, or risk being left behind as students choose their own adventure." "Microsoft's enterprise AI capabilities are enabling partners like us to deliver secure, cost-efficient, and responsible AI experiences at scale. With the Azure AI Foundry and key innovations like AI Avatars as our building blocks, the nebulaONE platform is poised to serve as the GenAI gateway to tens of thousands of business users, and millions of students at leading educational institutions globally. Our customers are seeking unique differentiators that will enable them to compete and win in the age of AI, and our collaboration with Microsoft is empowering us to deliver just that." Summary AI Avatars represent the next frontier in digital interaction. By combining conversational AI, expressive voice synthesis, and realistic visual rendering, these intelligent agents deliver truly human-like experiences—at scale. They are not just tools, but digital extensions of your brand. Partners like Cloudforce are leading the way with innovative platforms like nebulaONE, showing how this technology can be embedded into enterprise solutions and educational experiences to drive efficiency with a human touch. While Cloudforce is among the first to productize AI Avatars using Azure AI, they are part of a growing movement—helping to shape the future of AI-powered experiences across industries. As AI continues to evolve, avatars will become a standard interface—transforming the way we learn, work, and engage with digital systems.1.9KViews7likes2CommentsProject Maria: Bringing Speech and Avatars Together for Next-Generation Customer Experiences
In an age where digital transformation influences nearly every aspect of business, companies are actively seeking innovative ways to differentiate their customer interactions. Traditional text-based chatbots, while helpful, often leave users wanting a more natural, personalized, and efficient experience. Imagine hosting a virtual brand ambassador—a digital twin of yourself or your organization’s spokesperson—capable of answering customer queries in real time with a lifelike voice and expressive 2D or 3D face. This is where Project Maria comes in. Project Maria is an internal Microsoft initiative that integrates cutting-edge speech-to-text (STT), text-to-speech (TTS), large language model and avatar technologies. Using Azure AI speech and custom neural voice models, it seeks to create immersive, personalized interactions for customers—reducing friction, increasing brand loyalty, and opening new business opportunities in areas such as customer support, product briefings, digital twins, live marketing events, safety briefings, and beyond. In this blog post, we will dive into: The Problem and Rationale for evolving beyond basic text-based solutions. Speech-to-Text (STT), Text-to-Speech (TTS) Pipelines, Azure OpenAI GPT-4o Real-Time API that power natural conversations. Avatar Models in Azure, including off-the-shelf 2D avatars and fully customized custom avatar Neural Voice Model Creation, from data gathering to training and deployment on Azure. Security and Compliance considerations for handling sensitive voice assets and data. Use Cases from customer support to digital brand ambassadors and safety briefings. Real-World Debut of Project Maria, showcased at the AI Leaders’ Summit in Seattle. Future Outlook on how custom avatar will reshape business interactions, scale presence, and streamline time-consuming tasks. If you’re developing or considering a neural (custom) voice + avatar models for your product or enterprise, this post will guide you through both conceptual and technical details to help you get started—and highlight where the field is heading next. 1. The Problem: Limitations of Text-Based Chatbots 1.1 Boredom and Fatigue in Text Interactions Text-based chatbots have come a long way, especially with the advent of powerful Large Language Models (LLMs) and Small Large Models (SLMs). Despite these innovations, interactions can still become tedious—often requiring users to spend significant personal time crafting the right questions. Many of us have experienced chatbots that respond with excessively verbose or repetitive messages, leading to boredom or even frustration. In industries that demand immediacy—like healthcare, finance, or real-time consumer support—purely text-based exchanges can feel slow and cumbersome. Moreover, text chat requires a user’s full attention to read and type, whether in a busy contact center environment or an internal knowledge base where employees juggle multiple tasks. 1.2 Desire for More Engaging and Efficient Modalities Today’s users expect something closer to human conversation. Devices ranging from smartphones to smart speakers and in-car infotainment systems have normalized voice-based interfaces. Adding an avatar—whether a 2D or 3D representation—deepens engagement by combining speech with a friendly visual persona. This can elevate brand identity: an avatar that looks, talks, and gestures like your company’s brand ambassador or a well-known subject-matter expert. 
1.3 The Need for Scalability In a busy customer support environment, human representatives simply can’t handle an infinite volume of conversations or offer 24/7 coverage across multiple channels. Automation is essential, yet providing high-quality automated interactions remains challenging. While a text-based chatbot might handle routine queries, a voice-based, avatar-enabled agent can manage more complex requests with greater dynamism and personality. By giving your digital support assistant both a “face” and a voice aligned with your brand, you can foster deeper emotional connections and provide a more genuine, empathetic experience. This blend of automation and personalization scales your support operations, ensuring higher customer satisfaction while freeing human agents to focus on critical or specialized tasks. 2. The Vision: Project Maria’s Approach Project Maria addresses these challenges by creating a unified pipeline that supports: Speech-to-Text (STT) for recognizing user queries quickly and accurately. Natural Language Understanding (NLU) layers (potentially leveraging Azure OpenAI or other large language models) for comprehensive query interpretation. Text-to-Speech (TTS) that returns highly natural-sounding responses, possibly in multiple languages, with customized prosody and style. Avatar Rendering, which can be a 2D animated avatar or a more advanced 3D digital twin, bringing personality and facial expressions to the conversation. By using Azure AI Services—particularly the Speech and Custom Neural Voice offerings—can deliver brand-specific voices. This ensures that each brand or individual user’s avatar can match (or approximate) a signature voice, turning a run-of-the-mill voice assistant into a truly personal digital replicas 3. Technical Foundations 3.1 Speech-to-Text (STT) At the heart of the system is Azure AI Services for Speech, which provides: Real-time transcription capabilities with a variety of languages and dialects. Noise suppression, ensuring robust performance in busy environments. Streaming APIs, critical for real-time or near-real-time interactions. When a user speaks, audio data is captured (for example, via a web microphone feed or a phone line) and streamed to the Azure service. The recognized text is returned in segments, which the NLU or conversation manager can interpret. 3.1.1 Audio Pipeline Capture: The user’s microphone audio is captured by a front-end (e.g., a web app, mobile app, or IoT device). Pre-processing: Noise reduction or volume normalization might be applied locally or in the cloud, ensuring consistent input. Azure STT Ingestion: Data is sent to the Speech service endpoint, authenticated via subscription keys or tokens (more on security later). Result Handling: The recognized text arrives in partial hypotheses (partial transcripts) and final recognized segments. Project Maria (Custom Avatar) processes these results to understand user intent 3.2 Text-to-Speech (TTS) Once an intent is identified and a response is formulated, the system needs to deliver speech output. Standard Neural Voices: Microsoft provides a wide range of prebuilt voices in multiple languages. Custom Neural Voice: For an even more personalized experience, you can train a voice model that matches a brand spokesperson or a distinct voice identity. This is done using your custom datasets, ensuring the final system speaks exactly like the recorded persona. 
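Before moving on to TTS configuration, here is a minimal sketch grounding the STT pipeline from section 3.1 with the Azure Speech SDK for Python: partial hypotheses stream while the user is still speaking, and final segments arrive as each phrase completes. The key and region are read from environment variables here purely for brevity (see section 5 on secret handling).

```python
# Minimal continuous speech-to-text loop with the Azure Speech SDK.
# Assumes SPEECH_KEY / SPEECH_REGION environment variables are set.
import os
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
speech_config.speech_recognition_language = "en-US"

audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config)

# Partial hypotheses arrive while the user is still speaking...
recognizer.recognizing.connect(lambda evt: print(f"[partial] {evt.result.text}"))
# ...and final recognized segments arrive when each phrase completes.
recognizer.recognized.connect(lambda evt: print(f"[final]   {evt.result.text}"))

recognizer.start_continuous_recognition()
time.sleep(30)   # listen for 30 seconds in this demo
recognizer.stop_continuous_recognition()
```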
3.2.1 Voice Font Selection and Configuration In a typical architecture: The conversation manager (which could be an orchestrator or a custom microservice) provides the text output to the TTS service. The TTS service uses a configured voice font—like en-US-JennyNeural or a custom neural voice ID (like Maria Neural Voice) if you have a specialized voice model. The synthesized audio is returned as an audio stream (e.g., PCM or MP3). You can play this in a webpage directly or in a native app environment. Azure OpenAI GPT-4o Real-Time API integrates with Azure's Speech Services to enable seamless interactions. First, your speech is transcribed in near real time. GPT-4o then processes this text to generate context-aware responses, which are converted to natural-sounding audio via Azure TTS. This audio is synchronized with avatar models to create a lifelike, engaging interface 3.3 Real-Time Conversational Loop Maria is designed for real-time or text to speech conversations. The user’s speech is continuously streamed to Azure STT. The recognized text triggers a real-time inference step for the next best action or response. The response is generated by Azure OpenAI model (like GPT-4o) or other LLM/SLM The text is then synthesized to speech, which the user hears with minimal latency. 3.4 Avatars: 2D and Beyond 3.4.1 Prebuilt Azure 2D Avatars Azure AI Speech Services includes an Avatar capability that can be activated to display a talking head or a 2D animated character. Developers can: Choose from prebuilt characters or import basic custom animations. Synchronize lip movements to the TTS output. Overlay brand-specific backgrounds or adopt transparency for embedding in various UIs. 3.4.2 Fully Custom Avatars (Customer Support Agent Like Maria) For organizations wanting a customer support agent, subject-matter expert, or brand ambassador: Capture: Record high-fidelity audio and video of the person you want to replicate. The more data, the better the outcome (though privacy and licensing must be considered). Modeling: Use advanced 3D or specialized 2D animation software (or partner with Microsoft’s custom avatar creation solutions) to generate a rigged model that matches the real person’s facial geometry and expressions. Integration: Once the model is rigged, it can be integrated with the TTS engine. As text is converted to speech, the avatar automatically animates lip shapes and facial expressions in near real time. 3.5 Latency and Bandwidth Considerations When building an interactive system, keep an eye on: Network latency: Real-time STT and TTS require stable, fast connections. Compute resources: If hosting advanced ML or high concurrency, scaling containers (e.g., via Docker and Kubernetes) is critical. Avatars: Real-time animation might require sending frames or instructions to a client’s browser or device. 4. Building the Model: Neural Voice Model Creation 4.1 Data Gathering To train a custom neural voice, you typically need: High-quality audio clips: Ideally recorded in a professional studio to minimize background noise, with the same microphone setup throughout. Matching transcripts for each clip. Minimum data duration: Microsoft recommends a certain threshold (e.g., 300+ utterances, typically around 30 minutes to a few hours of recorded speech, depending on the complexity of the final voice needed). 4.2 Training Process Data Upload: Use the Azure Speech portal or APIs to upload your curated dataset. Model Training: Azure runs training jobs that often require a few hours (or more). 
This step includes: Acoustic feature extraction (spectrogram analysis). Language or phoneme modeling for the relevant language and accent. Prosody tuning, ensuring the voice can handle various styles (cheerful, empathetic, urgent, etc.). Quality Checks: After training, you receive an initial voice model. You can generate test phrases to assess clarity, intonation, and overall quality. Iteration: If the voice quality is not satisfactory, you gather more data or refine the existing data (removing noisy segments or inaccurate transcripts). 4.3 Deployment Once satisfied with the custom neural voice: Deploy the model to an Azure endpoint within your subscription. Configure your TTS engine to use the custom endpoint ID instead of a standard voice. 5. Securing Avatar and Voice Models Security is paramount when personal data, brand identity, or intellectual property is on the line. 5.1 API Keys and Endpoints Azure AI Services requires an API key or an OAuth token to access STT/TTS features. Store keys in Azure Key Vault or as secure environment variables. Avoid hard-coding them in the front-end or source control. 5.2 Access Control Role-Based Access Control (RBAC) at both Azure subscription level and container (e.g., Docker or Kubernetes) level ensures only authorized personnel can deploy or manage the containers running these services. Network Security: Use private endpoints if you want to limit exposure to the public internet. 5.3 Intellectual Property Concerns Avatar and Voice Imitation: A avatar model and custom neural voice that mimics a specific individual must be authorized by that individual. Azure has a verification process in place to ensure consent. Data Storage: The training audio data and transcripts must be securely stored, often with encryption at rest and in transit. 6. Use Cases: Bringing It All Together 6.1 Customer Support A digital avatar that greets users on a website or mobile app can handle first-level queries: “Where can I find my billing information?” “What is your return policy?” By speaking these answers aloud with a friendly face and voice, the experience is more memorable and can reduce queue times for human agents. If the question is too complex, the avatar can seamlessly hand off to a live agent. Meanwhile, transcripts of the entire conversation are stored (e.g., in Azure Cosmos DB), enabling data analytics and further improvements to the system. 6.2 Safety Briefings and Public Announcements Industries like manufacturing, aviation, or construction must repeatedly deliver consistent safety messages. A personal avatar can recite crucial safety protocols in multiple languages, ensuring nothing is lost in translation. Because the TTS voice is consistent, workers become accustomed to the avatar’s instructions. Over time, you could even create a brand or site-specific “Safety Officer” avatar that fosters familiarity. 6.3 Digital Twins at Live Events Suppose you want your company’s spokesperson to simultaneously appear at multiple events across the globe. With a digital twin: The spokesperson’s avatar and voice “present” in real time, responding to local audience questions. This can be done in multiple languages, bridging communication barriers instantaneously. Attendees get a sense of personal interaction, while the real spokesperson can focus on core tasks, or appear physically at another event entirely. 
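To make the TTS side from sections 3.2.1 and 4.3 concrete before continuing, here is a short synthesis sketch with the Azure Speech SDK. Option A uses a prebuilt neural voice; Option B points the same config at a custom neural voice deployment via its endpoint ID. The custom voice name and deployment ID are placeholders.

```python
# Text-to-speech with either a prebuilt neural voice (A) or a custom neural
# voice deployment (B). Custom voice name and endpoint ID are placeholders.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])

# Option A: a prebuilt neural voice.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# Option B: a custom neural voice, addressed by its deployment (endpoint) ID.
# speech_config.endpoint_id = "<your-custom-voice-deployment-id>"
# speech_config.speech_synthesis_voice_name = "<YourCustomVoiceName>"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Hello! I'm Maria. How can I help you today?").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print(f"Synthesized {len(result.audio_data)} bytes of audio")
```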
6.4 AI Training and Education In e-learning platforms, a digital tutor can guide students through lessons, answer questions in real time, and adapt the tone of voice based on the difficulty of the topic or the student’s performance. By offering a face and voice, the tutor becomes more engaging than a text-only system. 7. Debut: Maria at the AI Leaders Summit in Seattle Project Maria had its first major showcase at the AI Leaders Summit in Seattle last week. We set up a live demonstration: Live Conversations: Attendees approached a large screen that displayed Maria’s 2D avatar. On-the-Fly: Maria recognized queries with STT, generated text responses from an internal knowledge base (powered by GPT-4o or domain-specific models), then spoke them back with a custom Azure neural voice. Interactive: The avatar lip-synced to the output speech, included animated gestures for emphasis, and even displayed text-based subtitles for clarity. The response was overwhelmingly positive. Customers praised the fluid voice quality and the lifelike nature of Maria’s avatar. Many commented that they felt they were interacting with a real brand ambassador, especially because the chosen custom neural voice had just the right inflections and emotional range. 8. Technical Implementation Details Below is a high-level architecture of how Project Maria might be deployed using containers and Azure resources. Front-End Web App: Built with a modern JavaScript framework (React, Vue, Angular, etc.). Captures user audio through the browser’s WebRTC or MediaStream APIs. Connects via WebSockets or RESTful endpoints for STT requests. Renders the avatar in a <canvas> element or using a specialized avatar library. Backend: Containerized with Docker. Exposes endpoints for STT streaming (optionally passing data directly to Azure for transcription). Integrates with the TTS service, retrieving synthesized audio buffers. Returns the audio back to the front-end in a continuous stream for immediate playback. Avatar Integration: The back-end or a specialized service handles lip-sync generation (e.g., via phoneme mapping from the TTS output). The front-end renders the 2D or 3D avatar in sync with the audio playback. This can be done by streaming timing markers that indicate which phoneme is currently active. Data and Conversation Storage: Use an Azure Cosmos DB or a similar NoSQL solution to store transcripts, user IDs, timestamps, and optional metadata (e.g., conversation sentiment). This data can later be used to improve the conversation model, evaluate performance, or train advanced analytics solutions. Security: All sensitive environment variables (like Azure API keys) are loaded securely, either through Azure Key Vault or container orchestration secrets. The system enforces user authentication if needed. For instance, an internal HR system might restrict the avatar-based service to employees only. Scaling: Deploy containers in Azure Kubernetes Service (AKS), setting up auto-scaling to handle peak loads. Monitor CPU/memory usage, as well as TTS quota usage. For STT, ensure the service tier can handle simultaneous requests from multiple users. 9. Securing Avatar Models and Voice Data 9.1 Identity Management Each avatar or custom neural voice is tied to a specific subscription. Using Azure Active Directory (Azure AD), you can give fine-grained permissions so that only authorized DevOps or AI specialists can alter or redeploy the voice. 
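As a small illustration of the secret-handling guidance in sections 8 and 9, the backend can pull the Speech key from Azure Key Vault at startup instead of reading it from configuration files. The vault URL and secret name below are placeholders; authentication uses the standard DefaultAzureCredential chain (managed identity in Azure, developer credentials locally).

```python
# Fetch the Speech service key from Azure Key Vault at startup.
# Vault URL and secret name are placeholders.
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secret_client = SecretClient(
    vault_url="https://<your-key-vault>.vault.azure.net/",
    credential=credential)

speech_key = secret_client.get_secret("speech-api-key").value
os.environ["SPEECH_KEY"] = speech_key  # hand the key to the rest of the app
```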
9.2 API Gateways and Firewalls For enterprise contexts, you might place an API Gateway in front of your containerized services. This central gateway can: Inspect requests for anomalies, Enforce rate-limits, Log traffic to meet compliance or auditing requirements. 9.3 Key Rotation and Secrets Management Frequently rotates keys to minimize the risk of compromised credentials. Tools like Azure Key Vault or GitHub’s secret storage features can automate the rotation process, ensuring minimal downtime. 10. The Path Forward: Scaling Custom Avatar 10.1 Extended Personalization While Project Maria currently focuses on voice and basic facial expressions, future expansions include: Emotion Synthesis: Beyond standard TTS expressions (friendly, sad, excited), we can integrate emotional AI to dynamically adjust the avatar’s tone based on user sentiment. Gesture Libraries: 2D or 3D avatars can incorporate hand gestures, posture changes, or background movements to mimic a real person in conversation. This reduces the “uncanny valley” effect. 10.2 Multilingual, Multimodal As businesses operate globally, multilingual interactions become paramount. We have seen many use cases to: Auto-detect language from a user’s speech and respond in kind. Offer real-time translation, bridging non-English speakers to brand content. 10.3 Agent Autonomy Systems like Maria won’t just respond to direct questions; they can initiate proactivity: Send voice-based notifications or warnings when critical events happen. Manage long-running tasks such as scheduling or triaging user requests, akin to an “executive assistant” for multiple users simultaneously. 10.4 Ethical and Social Considerations With near-perfect replicas of voices, there is a growing concern about identity theft, misinformation, and deepfakes. Companies implementing digital twins must: Secure explicit consent from individuals. Implement watermarking or authentication for voice data. Educate customers and employees on usage boundaries and disclaimers 11. Conclusion Project Maria represents a significant leap in how businesses and organizations can scale their presence, offering a humanized, voice-enabled digital experience. By merging speech-to-text, text-to-speech, and avatar technologies, you can: Boost Engagement: A friendly face and familiar voice can reduce user fatigue and build emotional resonance. Extend Brand Reach: Appear in many locations at once via digital twins, creating personalized interactions at scale. Streamline Operations: Automate repetitive queries while maintaining a human touch, freeing up valuable employee time. Ensure Security and Compliance: By using Azure’s robust ecosystem of services and best practices for voice data. As demonstrated at the AI Leaders Summit in Seattle, Maria is already reshaping how businesses think about communication. The synergy of avatars, neural voices, and secure, cloud-based AI is paving the way for the next frontier in customer interaction. Looking ahead, we anticipate that digital twins—like Maria—will become ubiquitous, automating not just chat responses but a wide range of tasks that once demanded human presence. From personalized marketing to advanced training scenarios, the possibilities are vast. In short, the fusion of STT, TTS, and avatar technologies is more than a novel gimmick; it is an evolution in human-computer interaction. By investing in robust pipelines, custom neural voice training, and carefully orchestrated containerized deployments, businesses can unlock extraordinary potential. 
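To illustrate the auto-detect behavior described in section 10.2, the Azure Speech SDK can identify which of several candidate languages is being spoken and transcribe it in a single pass; the candidate list below is an example (at-start detection supports a small set of candidates).

```python
# Detect which candidate language is being spoken, then transcribe it.
# The candidate list is an example; adjust per market.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])

auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "es-ES", "fr-FR", "hi-IN"])

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect,
    audio_config=speechsdk.audio.AudioConfig(use_default_microphone=True))

result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result).language
print(f"Detected language: {detected}")
print(f"Transcript:        {result.text}")
```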
Project Maria is our blueprint for how to do it right—secure, customizable, and scalable—helping organizations around the world transform user experiences in ways that are both convenient and captivating. If you’re looking to scale your brand, innovate in human-machine dialogues, or harness the power of digital twins, we encourage you to explore Azure AI Services’ STT, TTS, and Avatar solutions. Together, these advancements promise a future where your digital self (or brand persona) can meaningfully interact with users anytime, anywhere.
Detailed Technical Implementation: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/text-to-speech-avatar/what-is-custom-text-to-speech-avatar
Text to Speech with Multi-Agent Orchestration Framework: https://github.com/ganachan/Project_Maria_Accelerator_tts
Contoso_Maria_Greetings.mp4