AutoGen
Building Custom Chat AI: A Comprehensive Guide for Developers
In today's rapidly evolving digital landscape, integrating artificial intelligence (AI) into business operations has become a pivotal strategy for companies aiming to enhance customer engagement and streamline their processes. This article delves into the foundational steps and considerations for developers embarking on the journey of building a custom chat AI for their company website. From understanding the core concepts of AI to selecting the right models and implementing effective prompt engineering techniques, this guide provides a comprehensive overview to help developers navigate the complexities of AI development. Whether you are a beginner or have some experience in the field, the insights shared here will equip you with the knowledge and tools needed to create a robust and efficient chat AI tailored to your business needs. To explore these topics, we spoke with Nitya Narasimhan, Senior Cloud Advocate at Microsoft specializing in AI, and Wey Gu, an AI MVP from China.

What are the first steps a developer should take when starting to build a custom chat AI for their company website?

Nitya: If you are new to AI, start by familiarizing yourself with the core concepts and usage of AI models. A course like Generative AI for Beginners can be a great starting point. Next, get hands-on experience with models by trying out GitHub Models, which are free to use with just a GitHub account. This will help you build your intuition for model selection and prompt engineering. If you already have some experience, the initial steps to building a custom chat AI are as follows:
Identify the use case and requirements (e.g., typical questions asked and valid responses).
Choose a model to start prototyping (test the question with various models and compare results).
If your chat AI is grounded in your data, identify the data sources and formats (where and what).
Select an AI app template to jumpstart development and customize it with your model and data choices.

How does understanding model choice impact the development of a custom chat AI?

Nitya: Understanding model choice is crucial for developing a custom chat AI. It involves evaluating models based on three key factors: cost, customization, and performance.
Customization: Start by identifying the task you want to execute (e.g., chat, image, embeddings, agents). Filter models that support this capability and validate them with a test prompt to ensure they fit your requirements. This process will narrow down your options from thousands to a few suitable models.
Cost: Consider whether the model supports serverless deployments (pay-as-you-go, per token) or managed deployments (subscription-based, per VM). Evaluate costs not just for usage (chat completion) but also for end-to-end development (evaluations, iterative ideation).
Performance: Assess models based on latency (e.g., chat completions vs. reasoning models) and the quality and safety of responses. Understand default model characteristics (model card) and perform custom evaluations to ensure quality for your desired prompt dataset.

Can you explain the concept of prompt engineering and how it can be applied using GitHub Models?

Nitya: Prompt engineering involves guiding the model on how to process questions and generate responses to improve quality. Think of developers as teachers and models as students being taught to answer exam questions. Prompt engineering provides a rubric to guide models in giving relevant answers. This includes providing examples, creating personas (e.g., "answer politely using formal language"), defining output formats (e.g., "answer in 1-2 sentences", "reply with results in JSON format"), and configuring model parameters (e.g., temperature, stop words, top-p, max tokens). When working with GitHub Models, you can configure models using the Playground (UI) or move to an IDE with the Azure AI Inference API, offering both low-code and code-first options for prompt engineering.
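As a quick illustration of the ideas Nitya describes, here is a minimal sketch (not taken from the interview) that applies a persona, an output-format instruction, and a few model parameters using the OpenAI Python SDK against the GitHub Models endpoint discussed later on this page. The brand name and prompts are placeholders to adapt to your own scenario.

```python
import os
import openai

# GitHub Models exposes an OpenAI-compatible endpoint; authenticate with a GitHub PAT.
client = openai.OpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
)

# Persona and output format are expressed in the system message (the "rubric").
system_prompt = (
    "You are a support assistant for Contoso. "  # hypothetical brand
    "Answer politely using formal language. "
    "Answer in 1-2 sentences and reply with results in JSON format, "
    'for example {"answer": "..."}.'
)

response = client.chat.completions.create(
    model="gpt-4o",        # any chat model available in GitHub Models
    temperature=0.2,       # lower temperature for more consistent answers
    max_tokens=150,        # cap response length
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Where can I find my billing information?"},
    ],
)
print(response.choices[0].message.content)
```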
What is retrieval augmented generation (RAG), and how does it enhance the ability to chat with data?

Wey: RAG involves grounding user questions in retrieved knowledge from private data sources to ensure responses are relevant to the application scenario. It works by wrapping the initial user prompt in a prompt template to create the final model prompt sent to the model. The RAG workflow includes retrieval of knowledge, augmentation of the prompt, and generation of the response. This dynamic process provides relevant grounding data and instructions to contextualize user questions for app-required responses. (A minimal sketch of this prompt-template wrapping appears at the end of this Q&A.)

What are some practical tips for developers to streamline their end-to-end journey from catalog to cloud?

Nitya: Here are three tips to get started:
Model Selection: Use GitHub Models with diverse test prompts to build intuition for prompt engineering and model capabilities. Compare models side-by-side.
Copilot Development: Start with an Azure AI app template. Deploy it to understand the application and its architecture before customizing it to your needs. Validate your development environment and get familiar with the tools.
Safety & Evaluation: Explore the built-in content safety filters and evaluators in the Azure AI platform to understand the metrics and effectiveness of your prompt engineering or RAG strategy. Use tracing and App Insights to monitor performance and cost.

What are some common challenges developers might face when building a custom chat AI, and how can they overcome them?

Nitya and Wey: There are many challenges we can think of; here are three that are important:
App Architecture: Understand the app architecture for your scenario (e.g., RAG, multi-agent). Explore existing AI app templates to build intuition and customize one that fits your requirements.
Model Choice: Choose models based on cost, quota availability, and flexibility for future configuration. Use the Azure AI model inference API to abstract provider-specific SDKs and decouple your code from your choice, allowing for easier model swaps later.
Observability: Debug issues in app development or execution performance. Use platforms and tools that bring observability to the end-to-end workflow. Activate App Insights and use tracing tools to generate telemetry for insights locally or in production.

What resources and samples are available for further exploration into this subject?

Wey: Explore the Azure AI App Templates, the For Beginners curricula, the RAG Chat Workshop, the AI Tour Workshops, and Generative AI for Beginners. For more workshops and talks, visit https://aka.ms/aitour/repos. Feel free to check out open-source projects like AutoGen, LlamaIndex, LangChain, and CamelAI, and their documentation.
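To make Wey's description of RAG concrete, here is a minimal, framework-agnostic sketch of the retrieve-augment-generate loop. The in-memory document store and retrieval function are illustrative placeholders; a real application would query a search index or vector store, and the GitHub Models endpoint shown elsewhere on this page is reused here.

```python
import os
import openai

client = openai.OpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
)

# Placeholder retrieval step: swap in a real search index or vector store.
def retrieve(query: str) -> list[str]:
    docs = {
        "return": "Contoso accepts returns within 30 days with a receipt.",  # hypothetical data
        "billing": "Billing statements are available under Account > Billing.",
    }
    return [text for key, text in docs.items() if key in query.lower()]

def rag_answer(user_question: str) -> str:
    grounding = "\n".join(retrieve(user_question)) or "No relevant documents found."
    # Augmentation: wrap the user prompt in a template that injects the grounding data.
    final_prompt = (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"Sources:\n{grounding}\n\nQuestion: {user_question}"
    )
    # Generation: send the final model prompt to the chat model.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": final_prompt}],
    )
    return response.choices[0].message.content

print(rag_answer("What is your return policy?"))
```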
As we conclude this exploration into building a custom chat AI for your company website, it's clear that the journey is both challenging and rewarding. By understanding the core concepts of AI, selecting the right models, and mastering prompt engineering, developers can create a powerful tool that enhances customer engagement and streamlines business operations. The insights and practical tips shared in this article provide a solid foundation for embarking on this journey. Remember, the key to success lies in continuous learning and adaptation. As AI technology evolves, so should your approach to developing and refining your chat AI. Stay curious, stay innovative, and most importantly, stay committed to delivering the best possible experience for your users.

AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
This blog post, Part 7 in a series on AI agents, focuses on the Planning Design Pattern for effective task orchestration. It explains how to define clear goals, decompose complex tasks into manageable subtasks, and leverage structured output (e.g., JSON) for seamless communication between agents. The post includes code snippets demonstrating how to create a planning agent, orchestrate multi-agent workflows, and implement iterative planning for dynamic adaptation. It also links to a practical example notebook (07-autogen.ipynb) and further resources like AutoGen Magentic-One, encouraging readers to explore advanced planning concepts. Links to the previous posts in the series are provided for easy access to foundational AI agent concepts.
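To give a flavor of the pattern, here is a small sketch (separate from the linked notebook) of a planning step that asks a model for a structured JSON plan and validates it with Pydantic before subtasks are handed to downstream agents. The schema, agent names, and task are illustrative assumptions, and the sketch assumes the chosen model supports JSON output mode.

```python
import os
import json
import openai
from pydantic import BaseModel

class SubTask(BaseModel):
    assigned_agent: str   # e.g. "flight_booking" or "hotel_booking" (illustrative names)
    task_details: str

class TravelPlan(BaseModel):
    main_task: str
    subtasks: list[SubTask]

client = openai.OpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
)

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask the model for structured JSON output
    messages=[
        {"role": "system", "content": (
            "You are a planner. Decompose the user's request into subtasks and "
            'return JSON matching {"main_task": str, "subtasks": '
            '[{"assigned_agent": str, "task_details": str}]}.'
        )},
        {"role": "user", "content": "Plan a 3-day trip to Seattle for two people."},
    ],
)

# Validate the plan before routing each subtask to the matching agent.
plan = TravelPlan.model_validate(json.loads(response.choices[0].message.content))
for sub in plan.subtasks:
    print(f"{sub.assigned_agent}: {sub.task_details}")
```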
How to use any Python AI agent framework with free GitHub Models

I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, when the desire to learn is high but the available cash is low. That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct), and even a few embedding models (from OpenAI and Cohere). With access to such a range of models, you can prototype complex multi-model workflows to improve your productivity or, heck, just make something fun for yourself. 🤗

To use GitHub Models, you can start off in no-code mode: open the playground for a model, send a few requests, tweak the parameters, and check out the answers. When you're ready to write code, select "Use this model". A screen will pop up where you can select a programming language (Python/JavaScript/C#/Java/REST) and select an SDK (which varies depending on the model). Then you'll get instructions and code for that model, language, and SDK.

But here's what's really cool about GitHub Models: you can use them with all the popular Python AI frameworks, even if the framework has no specific integration with GitHub Models. How is that possible? The vast majority of Python AI frameworks support the OpenAI Chat Completions API, since that API became a de facto standard supported by many LLM API providers besides OpenAI itself. GitHub Models also provide OpenAI-compatible endpoints for chat completion models. Therefore, any Python AI framework that supports OpenAI-like models can be used with GitHub Models as well. 🎉

To prove it, I've made a new repository with examples from eight different Python AI agent packages, all working with GitHub Models: python-ai-agent-frameworks-demos. There are examples for AutoGen, LangGraph, LlamaIndex, OpenAI Agents SDK, OpenAI standard SDK, PydanticAI, Semantic Kernel, and SmolAgents. You can open that repository in GitHub Codespaces, install the packages, and get the examples running immediately. Now let's walk through the API connection code for GitHub Models for each framework. Even if I missed your favorite framework, I hope my tips here will help you connect any framework to GitHub Models.

OpenAI

I'll start with openai, the package that started it all!

```python
import os
import openai

client = openai.OpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")
```

The code above demonstrates the two key parameters we'll need to configure for all frameworks:

api_key: When using OpenAI.com, you pass your OpenAI API key here. When using GitHub Models, you pass in a Personal Access Token (PAT). If you open the repository (or any repository) in GitHub Codespaces, a PAT is already stored in the GITHUB_TOKEN environment variable. However, if you're working locally with GitHub Models, you'll need to generate a PAT yourself and store it. PATs expire after a while, so you need to generate new PATs every so often.

base_url: This parameter tells the OpenAI client to send all requests to "https://models.inference.ai.azure.com" instead of the OpenAI.com API servers. That's the domain that hosts the OpenAI-compatible endpoint for GitHub Models, so you'll always pass that domain as the base URL.
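Before wiring this client into a framework, it can be worth sanity-checking the connection with a plain chat completion request. This is a minimal sketch; the model name is one of the chat models listed later in this post.

```python
import os
import openai

client = openai.OpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")

# One-off request to verify the token and endpoint are working.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}])
print(response.choices[0].message.content)
```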
If we're working with the new openai-agents SDK, we use very similar code, but we must use the AsyncOpenAI client from openai instead. Lately, Python AI packages are defaulting to async, because it's so much better for performance.

```python
import os
import agents
import openai

client = openai.AsyncOpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"])

model = agents.OpenAIChatCompletionsModel(
    model="gpt-4o",
    openai_client=client)

spanish_agent = agents.Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
    model=model)
```

PydanticAI

Now let's look at all of the packages that make it really easy for us, by allowing us to directly bring in an instance of either OpenAI or AsyncOpenAI. For PydanticAI, we configure an AsyncOpenAI client, then construct an OpenAIModel object from PydanticAI, and pass that model to the agent:

```python
import os
import openai
import pydantic_ai
import pydantic_ai.models.openai
from pydantic_ai.providers.openai import OpenAIProvider

client = openai.AsyncOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")

model = pydantic_ai.models.openai.OpenAIModel(
    "gpt-4o",
    provider=OpenAIProvider(openai_client=client))

spanish_agent = pydantic_ai.Agent(
    model,
    system_prompt="You only speak Spanish.")
```

Semantic Kernel

For Semantic Kernel, the code is very similar. We configure an AsyncOpenAI client, then construct an OpenAIChatCompletion object from Semantic Kernel, and add that object to the kernel.

```python
import os
import openai
import semantic_kernel
import semantic_kernel.connectors.ai.open_ai
import semantic_kernel.agents

chat_client = openai.AsyncOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")

chat = semantic_kernel.connectors.ai.open_ai.OpenAIChatCompletion(
    ai_model_id="gpt-4o",
    async_client=chat_client)

# Create the kernel that will host the chat completion service.
kernel = semantic_kernel.Kernel()
kernel.add_service(chat)

spanish_agent = semantic_kernel.agents.ChatCompletionAgent(
    kernel=kernel,
    name="Spanish agent",
    instructions="You only speak Spanish")
```

AutoGen

Next, we'll check out a few frameworks that have their own wrapper of the OpenAI clients, so we won't be using any classes from openai directly. For AutoGen, we configure both the OpenAI parameters and the model name in the same object, then pass that to each agent:

```python
import os
import autogen_ext.models.openai
import autogen_agentchat.agents

client = autogen_ext.models.openai.OpenAIChatCompletionClient(
    model="gpt-4o",
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")

spanish_agent = autogen_agentchat.agents.AssistantAgent(
    "spanish_agent",
    model_client=client,
    system_message="You only speak Spanish")
```

LangGraph

For LangGraph, we configure a very similar object, which even has the same parameter names:

```python
import os
import langchain_openai
import langgraph.graph

model = langchain_openai.ChatOpenAI(
    model="gpt-4o",
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com",
)

def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

workflow = langgraph.graph.StateGraph(langgraph.graph.MessagesState)
workflow.add_node("agent", call_model)
```

SmolAgents

Once again, for SmolAgents, we configure a similar object, though with slightly different parameter names:

```python
import os
import smolagents

model = smolagents.OpenAIServerModel(
    model_id="gpt-4o",
    api_key=os.environ["GITHUB_TOKEN"],
    api_base="https://models.inference.ai.azure.com")

agent = smolagents.CodeAgent(model=model)
```

LlamaIndex

I saved LlamaIndex for last, as it is the most different.
The llama-index package has a different constructor for OpenAI.com versus OpenAI-like servers, so I opted to use that OpenAILike constructor instead. However, I also needed an embeddings model for my example, and the package doesn't have an OpenAIEmbeddingsLike constructor, so I used the standard OpenAIEmbedding constructor.

```python
import os
import llama_index.embeddings.openai
import llama_index.llms.openai_like
import llama_index.core.agent.workflow
from llama_index.core import Settings

Settings.llm = llama_index.llms.openai_like.OpenAILike(
    model="gpt-4o",
    api_key=os.environ["GITHUB_TOKEN"],
    api_base="https://models.inference.ai.azure.com",
    is_chat_model=True)

Settings.embed_model = llama_index.embeddings.openai.OpenAIEmbedding(
    model="text-embedding-3-small",
    api_key=os.environ["GITHUB_TOKEN"],
    api_base="https://models.inference.ai.azure.com")

# query_engine_tools is assumed to be defined elsewhere (tools built from your indexes).
agent = llama_index.core.agent.workflow.ReActAgent(
    tools=query_engine_tools,
    llm=Settings.llm)
```

Choose your models wisely!

In all of the examples above, I specified the gpt-4o model. The gpt-4o model is a great choice for agents because it supports function calling, and many agent frameworks only work (or work best) with models that natively support function calling. Fortunately, GitHub Models includes multiple models that support function calling, at least in my basic experiments:

gpt-4o
gpt-4o-mini
o3-mini
AI21-Jamba-1.5-Large
AI21-Jamba-1.5-Mini
Codestral-2501
Cohere-command-r
Ministral-3B
Mistral-Large-2411
Mistral-Nemo
Mistral-small

You might find that some models work better than others, especially if you're using agents with multiple tools. With GitHub Models, it's very easy to experiment and see for yourself, by simply changing the model name and re-running the code.

Join the AI Agents Hackathon

We are currently running a free virtual hackathon from April 8th - 30th, to challenge developers to create agentic applications using Microsoft technologies. You could build an agent entirely using GitHub Models and submit it to the hackathon for a chance to win amazing prizes! You can also join our 30+ streams about building AI agents, including a stream all about prototyping with GitHub Models. Learn more and register at https://aka.ms/agentshack

AI Agents: Building Trustworthy Agents - Part 6
This blog post, Part 6 in a series on AI agents, focuses on building trustworthy AI agents. It emphasizes the importance of safety and security in agent design and deployment. The post details a system message framework for creating robust and scalable prompts, outlining a four-step process from meta prompt to iterative refinement. It then explores various threats to AI agents, including task manipulation, unauthorized access, resource overloading, knowledge base poisoning, and cascading errors, providing mitigation strategies for each. The post also highlights the human-in-the-loop approach for enhanced trust and control, providing a code example using AutoGen. Finally, it links to further resources on responsible AI, model evaluation, and risk assessment, along with the previous posts in the series.
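The post's human-in-the-loop example is written with AutoGen; as a framework-neutral illustration of the same idea, here is a minimal sketch in which high-impact actions proposed by an agent must be approved by a person before they run. The action names and risk list are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str          # e.g. "send_refund" (illustrative)
    arguments: dict

def requires_approval(action: ProposedAction) -> bool:
    # Route high-impact actions to a human; let low-risk ones through automatically.
    high_risk = {"send_refund", "delete_record", "external_email"}
    return action.name in high_risk

def human_in_the_loop(action: ProposedAction) -> bool:
    if not requires_approval(action):
        return True
    answer = input(f"Approve {action.name} with {action.arguments}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    if human_in_the_loop(action):
        print(f"Executing {action.name}...")   # call the real tool here
    else:
        print(f"Action {action.name} was rejected; logging for review.")

execute(ProposedAction(name="send_refund", arguments={"order_id": 123, "amount": 40}))
```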
The Launch of "AI Agents for Beginners": Your Gateway to Building Intelligent Systems

🌱 Getting Started
Each lesson covers fundamental aspects of building AI Agents. Whether you're a novice or have some experience, you'll find valuable insights and practical knowledge. We also support multiple languages, so you can learn in your preferred language. To see the available languages, click here. If this is your first time working with Generative AI models, we highly recommend our "Generative AI For Beginners" course, which includes 21 lessons on building with GenAI. Remember to star (🌟) this repository and fork it to run the code!

📋 What You Need
The course includes code examples that you can find in the code_samples folder. Feel free to fork this repository to create your own copy. The exercises utilize Azure AI Foundry and GitHub Model Catalogs for interacting with Language Models:
GitHub Models - Free / Limited
Azure AI Foundry - Azure Account Required
We also leverage the following AI Agent frameworks and services from Microsoft:
Azure AI Agent Service
Semantic Kernel
AutoGen
For more information on running the code for this course, visit the Course Setup.

🙏 Want to Help?
We welcome contributions from the community! If you have suggestions or spot any errors, please raise an issue or create a pull request. If you encounter any difficulties or have questions about building AI Agents, join our Azure AI Community on Discord.

📂 Each Lesson Includes
A written lesson located in the README (Videos Coming March 2025)
Python code samples supporting Azure AI Foundry and GitHub Models (Free)
Links to extra resources to continue your learning

🗃️ Lessons Overview
Intro to AI Agents and Use Cases
Exploring Agentic Frameworks
Understanding Agentic Design Patterns
Tool Use Design Pattern
Agentic RAG
Building Trustworthy AI Agents
Planning Design Pattern
Multi-Agent Design Pattern
Metacognition Design Pattern
AI Agents in Production

🌐 Multi-Language Support
We offer translations in several languages and will be updating these on a regular basis.

🚀 Go fork or clone this repo and get started on your AI Agents journey 🤖 at https://aka.ms/ai-agents-beginners

Project Maria: Bringing Speech and Avatars Together for Next-Generation Customer Experiences
In an age where digital transformation influences nearly every aspect of business, companies are actively seeking innovative ways to differentiate their customer interactions. Traditional text-based chatbots, while helpful, often leave users wanting a more natural, personalized, and efficient experience. Imagine hosting a virtual brand ambassador—a digital twin of yourself or your organization's spokesperson—capable of answering customer queries in real time with a lifelike voice and expressive 2D or 3D face. This is where Project Maria comes in.

Project Maria is an internal Microsoft initiative that integrates cutting-edge speech-to-text (STT), text-to-speech (TTS), large language model, and avatar technologies. Using Azure AI Speech and custom neural voice models, it seeks to create immersive, personalized interactions for customers—reducing friction, increasing brand loyalty, and opening new business opportunities in areas such as customer support, product briefings, digital twins, live marketing events, safety briefings, and beyond.

In this blog post, we will dive into:
The Problem and Rationale for evolving beyond basic text-based solutions.
Speech-to-Text (STT) and Text-to-Speech (TTS) Pipelines, plus the Azure OpenAI GPT-4o Real-Time API, that power natural conversations.
Avatar Models in Azure, including off-the-shelf 2D avatars and fully customized custom avatars.
Neural Voice Model Creation, from data gathering to training and deployment on Azure.
Security and Compliance considerations for handling sensitive voice assets and data.
Use Cases from customer support to digital brand ambassadors and safety briefings.
Real-World Debut of Project Maria, showcased at the AI Leaders' Summit in Seattle.
Future Outlook on how custom avatars will reshape business interactions, scale presence, and streamline time-consuming tasks.

If you're developing or considering neural (custom) voice and avatar models for your product or enterprise, this post will guide you through both conceptual and technical details to help you get started—and highlight where the field is heading next.

1. The Problem: Limitations of Text-Based Chatbots

1.1 Boredom and Fatigue in Text Interactions
Text-based chatbots have come a long way, especially with the advent of powerful Large Language Models (LLMs) and Small Language Models (SLMs). Despite these innovations, interactions can still become tedious—often requiring users to spend significant personal time crafting the right questions. Many of us have experienced chatbots that respond with excessively verbose or repetitive messages, leading to boredom or even frustration. In industries that demand immediacy—like healthcare, finance, or real-time consumer support—purely text-based exchanges can feel slow and cumbersome. Moreover, text chat requires a user's full attention to read and type, whether in a busy contact center environment or an internal knowledge base where employees juggle multiple tasks.

1.2 Desire for More Engaging and Efficient Modalities
Today's users expect something closer to human conversation. Devices ranging from smartphones to smart speakers and in-car infotainment systems have normalized voice-based interfaces. Adding an avatar—whether a 2D or 3D representation—deepens engagement by combining speech with a friendly visual persona. This can elevate brand identity: an avatar that looks, talks, and gestures like your company's brand ambassador or a well-known subject-matter expert.
1.3 The Need for Scalability
In a busy customer support environment, human representatives simply can't handle an infinite volume of conversations or offer 24/7 coverage across multiple channels. Automation is essential, yet providing high-quality automated interactions remains challenging. While a text-based chatbot might handle routine queries, a voice-based, avatar-enabled agent can manage more complex requests with greater dynamism and personality. By giving your digital support assistant both a "face" and a voice aligned with your brand, you can foster deeper emotional connections and provide a more genuine, empathetic experience. This blend of automation and personalization scales your support operations, ensuring higher customer satisfaction while freeing human agents to focus on critical or specialized tasks.

2. The Vision: Project Maria's Approach

Project Maria addresses these challenges by creating a unified pipeline that supports:
Speech-to-Text (STT) for recognizing user queries quickly and accurately.
Natural Language Understanding (NLU) layers (potentially leveraging Azure OpenAI or other large language models) for comprehensive query interpretation.
Text-to-Speech (TTS) that returns highly natural-sounding responses, possibly in multiple languages, with customized prosody and style.
Avatar Rendering, which can be a 2D animated avatar or a more advanced 3D digital twin, bringing personality and facial expressions to the conversation.

By using Azure AI Services—particularly the Speech and Custom Neural Voice offerings—Project Maria can deliver brand-specific voices. This ensures that each brand or individual user's avatar can match (or approximate) a signature voice, turning a run-of-the-mill voice assistant into a truly personal digital replica.

3. Technical Foundations

3.1 Speech-to-Text (STT)
At the heart of the system is Azure AI Services for Speech, which provides:
Real-time transcription capabilities with a variety of languages and dialects.
Noise suppression, ensuring robust performance in busy environments.
Streaming APIs, critical for real-time or near-real-time interactions.
When a user speaks, audio data is captured (for example, via a web microphone feed or a phone line) and streamed to the Azure service. The recognized text is returned in segments, which the NLU or conversation manager can interpret.

3.1.1 Audio Pipeline
Capture: The user's microphone audio is captured by a front-end (e.g., a web app, mobile app, or IoT device).
Pre-processing: Noise reduction or volume normalization might be applied locally or in the cloud, ensuring consistent input.
Azure STT Ingestion: Data is sent to the Speech service endpoint, authenticated via subscription keys or tokens (more on security later).
Result Handling: The recognized text arrives in partial hypotheses (partial transcripts) and final recognized segments. Project Maria (Custom Avatar) processes these results to understand user intent.

3.2 Text-to-Speech (TTS)
Once an intent is identified and a response is formulated, the system needs to deliver speech output.
Standard Neural Voices: Microsoft provides a wide range of prebuilt voices in multiple languages.
Custom Neural Voice: For an even more personalized experience, you can train a voice model that matches a brand spokesperson or a distinct voice identity. This is done using your custom datasets, ensuring the final system speaks exactly like the recorded persona.
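As a rough sketch of how the STT and TTS pieces above look in code, the following uses the Azure Speech SDK for Python to recognize one utterance from the default microphone and speak a reply with a prebuilt neural voice. The environment variable names are assumptions, and a deployed custom neural voice would replace the prebuilt voice name.

```python
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"],     # assumed variable names
    region=os.environ["SPEECH_REGION"])

# Speech-to-text: recognize a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once_async().get()
print("Recognized:", result.text)

# Text-to-speech: speak a reply with a prebuilt neural voice
# (swap in your custom neural voice name once it is deployed).
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Thanks for your question! Let me check that for you.").get()
```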
3.2.1 Voice Font Selection and Configuration
In a typical architecture:
The conversation manager (which could be an orchestrator or a custom microservice) provides the text output to the TTS service.
The TTS service uses a configured voice font—like en-US-JennyNeural or a custom neural voice ID (like Maria Neural Voice) if you have a specialized voice model.
The synthesized audio is returned as an audio stream (e.g., PCM or MP3). You can play this in a webpage directly or in a native app environment.
The Azure OpenAI GPT-4o Real-Time API integrates with Azure's Speech Services to enable seamless interactions. First, your speech is transcribed in near real time. GPT-4o then processes this text to generate context-aware responses, which are converted to natural-sounding audio via Azure TTS. This audio is synchronized with avatar models to create a lifelike, engaging interface.

3.3 Real-Time Conversational Loop
Maria is designed for real-time speech-to-speech or text-to-speech conversations.
The user's speech is continuously streamed to Azure STT.
The recognized text triggers a real-time inference step for the next best action or response.
The response is generated by an Azure OpenAI model (like GPT-4o) or another LLM/SLM.
The text is then synthesized to speech, which the user hears with minimal latency.

3.4 Avatars: 2D and Beyond

3.4.1 Prebuilt Azure 2D Avatars
Azure AI Speech Services includes an Avatar capability that can be activated to display a talking head or a 2D animated character. Developers can:
Choose from prebuilt characters or import basic custom animations.
Synchronize lip movements to the TTS output.
Overlay brand-specific backgrounds or adopt transparency for embedding in various UIs.

3.4.2 Fully Custom Avatars (Customer Support Agents Like Maria)
For organizations wanting a customer support agent, subject-matter expert, or brand ambassador:
Capture: Record high-fidelity audio and video of the person you want to replicate. The more data, the better the outcome (though privacy and licensing must be considered).
Modeling: Use advanced 3D or specialized 2D animation software (or partner with Microsoft's custom avatar creation solutions) to generate a rigged model that matches the real person's facial geometry and expressions.
Integration: Once the model is rigged, it can be integrated with the TTS engine. As text is converted to speech, the avatar automatically animates lip shapes and facial expressions in near real time.

3.5 Latency and Bandwidth Considerations
When building an interactive system, keep an eye on:
Network latency: Real-time STT and TTS require stable, fast connections.
Compute resources: If hosting advanced ML or high concurrency, scaling containers (e.g., via Docker and Kubernetes) is critical.
Avatars: Real-time animation might require sending frames or instructions to a client's browser or device.

4. Building the Model: Neural Voice Model Creation

4.1 Data Gathering
To train a custom neural voice, you typically need:
High-quality audio clips: Ideally recorded in a professional studio to minimize background noise, with the same microphone setup throughout.
Matching transcripts for each clip.
Minimum data duration: Microsoft recommends a certain threshold (e.g., 300+ utterances, typically around 30 minutes to a few hours of recorded speech, depending on the complexity of the final voice needed).

4.2 Training Process
Data Upload: Use the Azure Speech portal or APIs to upload your curated dataset.
Model Training: Azure runs training jobs that often require a few hours (or more).
This step includes:
Acoustic feature extraction (spectrogram analysis).
Language or phoneme modeling for the relevant language and accent.
Prosody tuning, ensuring the voice can handle various styles (cheerful, empathetic, urgent, etc.).
Quality Checks: After training, you receive an initial voice model. You can generate test phrases to assess clarity, intonation, and overall quality.
Iteration: If the voice quality is not satisfactory, you gather more data or refine the existing data (removing noisy segments or inaccurate transcripts).

4.3 Deployment
Once satisfied with the custom neural voice:
Deploy the model to an Azure endpoint within your subscription.
Configure your TTS engine to use the custom endpoint ID instead of a standard voice.

5. Securing Avatar and Voice Models

Security is paramount when personal data, brand identity, or intellectual property is on the line.

5.1 API Keys and Endpoints
Azure AI Services requires an API key or an OAuth token to access STT/TTS features. Store keys in Azure Key Vault or as secure environment variables. Avoid hard-coding them in the front-end or source control.

5.2 Access Control
Role-Based Access Control (RBAC) at both the Azure subscription level and the container (e.g., Docker or Kubernetes) level ensures only authorized personnel can deploy or manage the containers running these services.
Network Security: Use private endpoints if you want to limit exposure to the public internet.

5.3 Intellectual Property Concerns
Avatar and Voice Imitation: An avatar model and custom neural voice that mimic a specific individual must be authorized by that individual. Azure has a verification process in place to ensure consent.
Data Storage: The training audio data and transcripts must be securely stored, often with encryption at rest and in transit.

6. Use Cases: Bringing It All Together

6.1 Customer Support
A digital avatar that greets users on a website or mobile app can handle first-level queries: "Where can I find my billing information?" "What is your return policy?" By speaking these answers aloud with a friendly face and voice, the experience is more memorable and can reduce queue times for human agents. If the question is too complex, the avatar can seamlessly hand off to a live agent. Meanwhile, transcripts of the entire conversation are stored (e.g., in Azure Cosmos DB), enabling data analytics and further improvements to the system.

6.2 Safety Briefings and Public Announcements
Industries like manufacturing, aviation, or construction must repeatedly deliver consistent safety messages. A personal avatar can recite crucial safety protocols in multiple languages, ensuring nothing is lost in translation. Because the TTS voice is consistent, workers become accustomed to the avatar's instructions. Over time, you could even create a brand- or site-specific "Safety Officer" avatar that fosters familiarity.

6.3 Digital Twins at Live Events
Suppose you want your company's spokesperson to simultaneously appear at multiple events across the globe. With a digital twin:
The spokesperson's avatar and voice "present" in real time, responding to local audience questions.
This can be done in multiple languages, bridging communication barriers instantaneously.
Attendees get a sense of personal interaction, while the real spokesperson can focus on core tasks, or appear physically at another event entirely.
6.4 AI Training and Education
In e-learning platforms, a digital tutor can guide students through lessons, answer questions in real time, and adapt the tone of voice based on the difficulty of the topic or the student's performance. By offering a face and voice, the tutor becomes more engaging than a text-only system.

7. Debut: Maria at the AI Leaders Summit in Seattle

Project Maria had its first major showcase at the AI Leaders Summit in Seattle last week. We set up a live demonstration:
Live Conversations: Attendees approached a large screen that displayed Maria's 2D avatar.
On-the-Fly: Maria recognized queries with STT, generated text responses from an internal knowledge base (powered by GPT-4o or domain-specific models), then spoke them back with a custom Azure neural voice.
Interactive: The avatar lip-synced to the output speech, included animated gestures for emphasis, and even displayed text-based subtitles for clarity.
The response was overwhelmingly positive. Customers praised the fluid voice quality and the lifelike nature of Maria's avatar. Many commented that they felt they were interacting with a real brand ambassador, especially because the chosen custom neural voice had just the right inflections and emotional range.

8. Technical Implementation Details

Below is a high-level architecture of how Project Maria might be deployed using containers and Azure resources.

Front-End Web App: Built with a modern JavaScript framework (React, Vue, Angular, etc.). Captures user audio through the browser's WebRTC or MediaStream APIs. Connects via WebSockets or RESTful endpoints for STT requests. Renders the avatar in a <canvas> element or using a specialized avatar library.

Backend: Containerized with Docker. Exposes endpoints for STT streaming (optionally passing data directly to Azure for transcription). Integrates with the TTS service, retrieving synthesized audio buffers. Returns the audio back to the front-end in a continuous stream for immediate playback.

Avatar Integration: The back-end or a specialized service handles lip-sync generation (e.g., via phoneme mapping from the TTS output). The front-end renders the 2D or 3D avatar in sync with the audio playback. This can be done by streaming timing markers that indicate which phoneme is currently active.

Data and Conversation Storage: Use Azure Cosmos DB or a similar NoSQL solution to store transcripts, user IDs, timestamps, and optional metadata (e.g., conversation sentiment). This data can later be used to improve the conversation model, evaluate performance, or train advanced analytics solutions.

Security: All sensitive environment variables (like Azure API keys) are loaded securely, either through Azure Key Vault or container orchestration secrets. The system enforces user authentication if needed. For instance, an internal HR system might restrict the avatar-based service to employees only.

Scaling: Deploy containers in Azure Kubernetes Service (AKS), setting up auto-scaling to handle peak loads. Monitor CPU/memory usage, as well as TTS quota usage. For STT, ensure the service tier can handle simultaneous requests from multiple users.

9. Securing Avatar Models and Voice Data

9.1 Identity Management
Each avatar or custom neural voice is tied to a specific subscription. Using Azure Active Directory (Azure AD), you can give fine-grained permissions so that only authorized DevOps or AI specialists can alter or redeploy the voice.
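As a small illustration of the secret-handling guidance above, here is a sketch that loads the Speech API key from Azure Key Vault at startup instead of hard-coding it. The vault and secret names are illustrative assumptions.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Resolve the Speech key at startup; the vault name and secret name are placeholders.
vault_url = f"https://{os.environ['KEY_VAULT_NAME']}.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
speech_key = client.get_secret("speech-api-key").value  # hypothetical secret name

# Pass speech_key to the Speech SDK configuration instead of reading it from source code.
```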
9.2 API Gateways and Firewalls
For enterprise contexts, you might place an API Gateway in front of your containerized services. This central gateway can:
Inspect requests for anomalies.
Enforce rate limits.
Log traffic to meet compliance or auditing requirements.

9.3 Key Rotation and Secrets Management
Frequently rotate keys to minimize the risk of compromised credentials. Tools like Azure Key Vault or GitHub's secret storage features can automate the rotation process, ensuring minimal downtime.

10. The Path Forward: Scaling Custom Avatars

10.1 Extended Personalization
While Project Maria currently focuses on voice and basic facial expressions, future expansions include:
Emotion Synthesis: Beyond standard TTS expressions (friendly, sad, excited), we can integrate emotional AI to dynamically adjust the avatar's tone based on user sentiment.
Gesture Libraries: 2D or 3D avatars can incorporate hand gestures, posture changes, or background movements to mimic a real person in conversation. This reduces the "uncanny valley" effect.

10.2 Multilingual, Multimodal
As businesses operate globally, multilingual interactions become paramount. We have seen many use cases that need to:
Auto-detect language from a user's speech and respond in kind.
Offer real-time translation, bridging non-English speakers to brand content.

10.3 Agent Autonomy
Systems like Maria won't just respond to direct questions; they can act proactively:
Send voice-based notifications or warnings when critical events happen.
Manage long-running tasks such as scheduling or triaging user requests, akin to an "executive assistant" for multiple users simultaneously.

10.4 Ethical and Social Considerations
With near-perfect replicas of voices, there is a growing concern about identity theft, misinformation, and deepfakes. Companies implementing digital twins must:
Secure explicit consent from individuals.
Implement watermarking or authentication for voice data.
Educate customers and employees on usage boundaries and disclaimers.

11. Conclusion

Project Maria represents a significant leap in how businesses and organizations can scale their presence, offering a humanized, voice-enabled digital experience. By merging speech-to-text, text-to-speech, and avatar technologies, you can:
Boost Engagement: A friendly face and familiar voice can reduce user fatigue and build emotional resonance.
Extend Brand Reach: Appear in many locations at once via digital twins, creating personalized interactions at scale.
Streamline Operations: Automate repetitive queries while maintaining a human touch, freeing up valuable employee time.
Ensure Security and Compliance: Use Azure's robust ecosystem of services and best practices for voice data.

As demonstrated at the AI Leaders Summit in Seattle, Maria is already reshaping how businesses think about communication. The synergy of avatars, neural voices, and secure, cloud-based AI is paving the way for the next frontier in customer interaction. Looking ahead, we anticipate that digital twins—like Maria—will become ubiquitous, automating not just chat responses but a wide range of tasks that once demanded human presence. From personalized marketing to advanced training scenarios, the possibilities are vast. In short, the fusion of STT, TTS, and avatar technologies is more than a novel gimmick; it is an evolution in human-computer interaction. By investing in robust pipelines, custom neural voice training, and carefully orchestrated containerized deployments, businesses can unlock extraordinary potential.
Project Maria is our blueprint for how to do it right—secure, customizable, and scalable—helping organizations around the world transform user experiences in ways that are both convenient and captivating. If you're looking to scale your brand, innovate in human-machine dialogues, or harness the power of digital twins, we encourage you to explore Azure AI Services' STT, TTS, and avatar solutions. Together, these advancements promise a future where your digital self (or brand persona) can meaningfully interact with users anytime, anywhere.

Detailed Technical Implementation: https://learn.microsoft.com/en-us/azure/ai-services/speech-service/text-to-speech-avatar/what-is-custom-text-to-speech-avatar
Text to Speech with Multi-Agent Orchestration Framework: https://github.com/ganachan/Project_Maria_Accelerator_tts

Building AI Agent Applications Series - Using AutoGen to build your AI Agents
In a previous post, we learned about AI Agents. If you haven't read it yet, please see the earlier post, Understanding AI Agents. There are many different frameworks for implementing AI Agents, and AutoGen from Microsoft is a relatively mature one. AutoGen currently targets two programming languages, .NET and Python, with the Python version being the more mature of the two. The content in this article is mainly based on the Python version (https://microsoft.github.io/autogen). If you want to learn about the .NET version, you can visit https://microsoft.github.io/autogen-for-net.

Microsoft Semantic Kernel and AutoGen: Open Source Frameworks for AI Solutions
Explore Microsoft's open-source frameworks, Semantic Kernel and AutoGen. Semantic Kernel enables developers to create AI solutions across various domains using a single Large Language Model (LLM). AutoGen, on the other hand, uses AI Agents to perform smart tasks through agent dialogues. Discover how these technologies serve different scenarios and can be used to build powerful AI applications.
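As a taste of the agent-dialogue style AutoGen enables, here is a minimal sketch assuming the AutoGen AgentChat packages (autogen-agentchat and autogen-ext, v0.4+) together with the GitHub Models endpoint used earlier on this page; the agent roles, prompts, and turn limit are illustrative.

```python
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    client = OpenAIChatCompletionClient(
        model="gpt-4o",
        api_key=os.environ["GITHUB_TOKEN"],
        base_url="https://models.inference.ai.azure.com")

    writer = AssistantAgent(
        "writer", model_client=client,
        system_message="Draft a short answer to the user's question.")
    reviewer = AssistantAgent(
        "reviewer", model_client=client,
        system_message="Review the draft and suggest one improvement.")

    # The agents take turns in a dialogue until the message limit is reached.
    team = RoundRobinGroupChat(
        [writer, reviewer],
        termination_condition=MaxMessageTermination(4))
    result = await team.run(task="Explain what an AI agent is in two sentences.")
    print(result.messages[-1].content)

asyncio.run(main())
```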