Bringing AI to Meetings with the Sample Builder
We're excited to share a significant update to the Azure Communication Services Sample Builder. This release integrates Azure's latest AI and video calling capabilities, adding meeting transcription and AI-generated call summaries to help organizations deliver insightful and effective meeting experiences. In just a few minutes, without writing any code, you can use the Sample Builder to start prototyping video calling with Azure AI integration. Click the link below to begin, or continue reading for further information.

👉 Try the Sample Builder

Note that this pattern of combining Azure Communication Services and Azure AI for meeting transcription and summarization is not limited to the Sample Builder. You can take the code and overall design pattern and rebuild the experience using the underlying APIs and SDKs.

What Is the Sample Builder?

The Sample Builder is a no-code Azure Portal experience that you use to brand, customize, build, and deploy a GitHub-based sample for prototyping. The sample integrates and deploys multiple Azure services for secure and engaging meetings:

- Application hosting of the meeting front-end is provided by Azure App Services
- High-definition video calling for mobile and desktop browsers is provided by Azure Communication Services Calling
- Role-based access for attendees and providers is implemented using Azure Communication Services Rooms
- An accessible, customizable, fluid user experience is built on the open-source Azure Communication Services UI Library

Designed for developers, IT teams, and solution architects, the Sample Builder gets you started quickly but doesn't produce a production-ready application. After prototyping, you can take the code from GitHub, customize the user experience, integrate your own systems, and fine-tune the AI interactions for production.

Smarter Meetings with Transcription and Summarization

Today's update integrates Azure AI Speech and Azure AI Language services directly into your meetings, transforming how companies capture, understand, and act on conversations. You can fine-tune this integration and take advantage of the latest innovation from Azure AI, ensuring your end users benefit from advancements in large language models, natural language understanding, and conversation summarization.

Transcription and meeting summarization are valuable across industries. For example:

- Healthcare: Automatically document patient-provider interactions, reduce administrative burden, and support clinical accuracy.
- Financial Services: Capture detailed records of client meetings to meet regulatory requirements and improve transparency.
- Education: Provide students and instructors with accessible records of virtual sessions, supporting learning and retention.

Real-Time Transcription

With transcription enabled, Azure Communication Services uses Azure AI Speech to Text to convert spoken language into a live, speaker-attributed transcript. This allows participants to stay fully engaged in the conversation without worrying about taking notes.
Key benefits include:

- Accurate, multilingual transcription for a wide variety of languages (you can see the full list of supported languages here)
- Speaker attribution for clarity and accountability
- Searchable meeting records for easy reference and knowledge sharing
- Support for multilingual teams, with transcripts that can be translated or reviewed post-meeting
- Training and quality assurance, enabling review of real conversations to improve service delivery

Transcripts can be stored securely and managed according to your organization's compliance, privacy, and retention policies, making them suitable for regulated industries.

AI-Generated Call Summaries

After the meeting, the Azure AI Language Summarization API automatically analyzes the transcript and generates a concise, structured summary. (For a rough idea of how this pattern can look in code when you rebuild it yourself, see the sketch at the end of this post.) This summary distills the conversation into key takeaways, including:

- Main discussion points
- Decisions made
- Action items and next steps

This helps teams:

- Align quickly on outcomes and responsibilities
- Brief stakeholders who couldn't attend
- Maintain consistent documentation for compliance, audits, or internal reporting
- Reduce meeting fatigue by eliminating the need to rewatch or reread entire transcripts

How to Get Started

You can try these new features today by following a few simple steps:

1. Go through the Sample Builder using the official tutorial.
2. Select the Rooms option in the booking and calling steps.
3. Enable auto-transcription, or allow users to turn on transcription, in the booking and calling steps.
4. Enable meeting summary in the post-call steps.
5. Choose how you want to deploy and go through the follow-up steps.
6. Once fully deployed, start a call to test. If not using auto-transcription, open the meeting controls and select "Start Transcription". Choose the spoken language (click here for the list of supported languages).
7. After the meeting ends, participants can view the AI-generated summary and download the transcript.

In the sample experience, transcripts and summaries are available temporarily. In production environments, you can store them securely and use them to support training, compliance, or analytics.

Microsoft 365 Integration

Today's update focuses on integrating Azure AI with Azure-hosted meetings. However, Azure Communication Services is interoperable with Microsoft Teams, and you can use the Sample Builder to deploy a branded Azure application that joins Teams meetings. Interoperability can be incredibly helpful for organizations that are already using Microsoft Teams but want to build a custom meeting experience for business-to-consumer (B2C) interactions. Using Microsoft Teams as the meeting host allows you to leverage:

- Teams Premium AI features for generating meeting notes and recommending follow-ups.
- Teams Premium virtual appointment features for scheduling B2C meetings and sending reminders across SMS, email, and other channels.
- Teams Phone capabilities so end users can dial into the meeting using traditional telephony.

Get Started Today

Explore the new AI-powered features in the Sample Builder and start building smarter virtual appointment experiences:

👉 Try the Sample Builder

With transcription and meeting summaries, your meetings can do more than connect people—they can capture insights, drive action, and deliver lasting value.
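If you rebuild this pattern outside the Sample Builder, the summarization step comes down to sending the speaker-attributed transcript to Azure AI Language conversation summarization. The sketch below is illustrative only and is not the Sample Builder's code; the api-version, the summaryAspects value, and the response shape are assumptions to verify against the Azure AI Language documentation.

```javascript
// Minimal sketch: summarize a call transcript with Azure AI Language conversation
// summarization via its REST API. Not the Sample Builder's implementation; the
// api-version and "summaryAspects" value below are assumptions to verify in the docs.
const endpoint = process.env.LANGUAGE_ENDPOINT; // e.g. https://<resource>.cognitiveservices.azure.com
const apiKey = process.env.LANGUAGE_KEY;

async function summarizeTranscript(transcriptItems) {
  // transcriptItems: [{ id, participantId, text }] built from the speaker-attributed transcript
  const body = {
    displayName: "Meeting summary",
    analysisInput: {
      conversations: [
        { id: "1", language: "en", modality: "text", conversationItems: transcriptItems }
      ]
    },
    tasks: [
      {
        kind: "ConversationalSummarizationTask",
        taskName: "summary",
        parameters: { summaryAspects: ["narrative"] } // assumed aspect name
      }
    ]
  };

  // Submit the asynchronous job; its status URL is returned in the operation-location header.
  const submit = await fetch(
    `${endpoint}/language/analyze-conversations/jobs?api-version=2023-04-01`, // assumed api-version
    {
      method: "POST",
      headers: { "Ocp-Apim-Subscription-Key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify(body)
    }
  );
  const jobUrl = submit.headers.get("operation-location");

  // Poll until the job completes, then return the generated summaries.
  // The result-parsing path below is an assumption about the job result shape.
  while (true) {
    const result = await (await fetch(jobUrl, { headers: { "Ocp-Apim-Subscription-Key": apiKey } })).json();
    if (result.status === "succeeded") {
      return result.tasks.items[0].results.conversations[0].summaries;
    }
    if (result.status === "failed") throw new Error("Summarization job failed");
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```

The Sample Builder wires this kind of call into its post-call flow for you; the sketch is only meant to show where the transcript goes and what comes back when you use the underlying APIs directly.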
Catch Up on the Azure Communication Services Fundamentals Series

This April, we partnered with Microsoft Reactor to deliver a four-part webcast series designed to help developers get started with Azure Communication Services. Each 20-minute episode focused on a different communication channel, giving developers the tools to build real-time, scalable, and secure communication experiences into their apps. If you missed the live sessions, don't worry: they're all available on-demand! Check out the full playlist here or see the following individual videos. Here's a quick look at what each episode covered:

Episode 1: WhatsApp Messaging

We kicked off the series by showing how to integrate WhatsApp Business messaging into your Azure Communication Services applications. We walked through everything from sandbox testing to connecting a verified WhatsApp Business Account and sending messages with SDKs. For a deeper dive on what was covered, read about it here from Gloria herself!

Episode 2: Exploring SMS Capabilities

This session described how to provision a phone number, verify it, and send and receive SMS messages. We also covered how to handle incoming messages with Event Grid listeners and code-based handlers. For a deeper dive on exactly what was covered, read about it here from Pranita herself!

Episode 3: Maximizing Email Insights with Logs and Events

Next, we dove into email analytics and telemetry: setting up logs and events, understanding sender reputation, and using sample queries to gain insights. From basic sample queries to advanced Kusto Query Language (KQL) queries, this session covered everything you need to run a successful email marketing campaign with Azure Communication Services.

Episode 4: Add Audio & Video Calling

We wrapped the season with a demo-rich session on embedding calling features into your communications application using Azure Communication Services. Highlights included new AI-powered features like captions, noise suppression, grid views, and real-time translation.

What's Next?

We're already planning Season 2, launching later this year, with a focus on Azure Communication Services + AI. Expect deeper dives, new use cases, and more interactive demos. Want to stay in the loop? Sign up for season two updates to be the first to know when the new season launches, and tell us what you want to learn about!
April 2025 Feature Updates

The Azure Communication Services team is excited to share several new product and feature updates released in March and April 2025. (You can view previous blog articles.)

Real Time Text
Status: GA

Real-Time Text (RTT), now in General Availability, is a transformative feature that enables text to be transmitted and displayed in real time during voice and video calls. Unlike traditional messaging, where the recipient sees the full message only after it is sent, RTT ensures that each character appears on the recipient's screen as it is typed. This creates a dynamic, conversational experience that mirrors spoken communication.

For developers, integrating RTT into their applications can significantly enhance user engagement and accessibility. By providing immediate and continuous text communication, RTT bridges the gap between spoken and written interactions, making conversations more inclusive and effective. This is particularly valuable in scenarios where clarity and real-time feedback are crucial, such as telehealth, remote banking, and customer support.

RTT also complies with European accessibility mandates, ensuring that voice and video calling services meet regulatory standards. By incorporating RTT, developers can offer a more inclusive communication experience, catering to users with diverse needs and preferences.

For more information, see:
- Real Time Text (RTT) Overview - An Azure Communication Services concept document | Microsoft Learn
- Real Time Text - An Azure Communication Services how-to document | Microsoft Learn

Mobile Numbers for SMS
Status: Public Preview

We are excited to announce the Public Preview of Mobile Numbers for SMS in Azure Communication Services. This feature is now available in 10 countries across Europe and Australia, providing businesses with locally trusted, two-way messaging capabilities.

Key Benefits:
- Dedicated Sender Identity: Supports conversational scenarios across industries like healthcare, retail, public sector, and finance.
- Interactive Conversations: Facilitates reply-enabled SMS flows with country-specific mobile numbers.
- Improved Deliverability: Uses number types accepted and prioritized by local carriers.

Use Cases:
- Inbound Communication: Customers can reply to alerts, offers, or initiate conversations, ensuring a seamless and engaging communication experience.

For more information, see: Phone number types - An Azure Communication Services article | Microsoft Learn

1080p Web Send
Status: Public Preview

The Azure Communication Services WebJS calling SDK now supports sending video at 1080p resolution. By enabling this advanced video quality, developers can build solutions that let their customers use higher-quality, more detailed video presentations. This enhancement provides developers and end users with an improved calling experience. Developers can opt in to this feature using the Video Constraints API (see the following example), enabling higher-quality video streams in their applications.

```javascript
const callOptions = {
    videoOptions: {
        localVideoStreams: [...],
        constraints: {
            send: { height: { max: 1080 } }
        }
    },
    audioOptions: { muted: false }
};
// make a call
this.callAgent.startCall(identitiesToCall, callOptions);
```

For more information, see: Place video on a web page based on resolution size - An Azure Communication Services quickstart | Microsoft Learn

Dual Pin 720p Videos
Status: Public Preview

The Azure Communication Services Calling SDK now enables developers to spotlight up to two incoming video streams.
These spotlighted streams can be sent at a higher resolution than other incoming streams, providing better quality for the highlighted videos. This feature enables developers to deliver a more engaging and visually rich experience for end users.

For more information, see Place video on a web page based on resolution size - An Azure Communication Services quickstart | Microsoft Learn

Background Blur for Android Mobile Browsers
Status: GA

The Azure Communication Services Calling SDK for Web now enables developers to implement background blur on Android mobile browsers. By enabling background blur, end users can enjoy calls with increased privacy and confidence, knowing that their background is obscured and won't cause any disruptions during the call. (A minimal sketch of enabling background blur with the WebJS video effects package appears at the end of this post.)

For more information, see Quickstart - Add video effects to your video calls - An Azure Communication Services quickstart | Microsoft Learn.

Mobile Browsers Support Sending 720p Resolution Video
Status: GA

The Azure Communication Services Calling SDK for Web mobile browsers now enables developers to send video at 720p resolution by default. Enabling a higher video resolution to be sent from mobile browsers provides all users with higher-quality video and a more engaging calling experience.

For more information, see Azure Communication Services Calling SDK overview - An Azure Communication Services concept document | Microsoft Learn.
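As referenced in the background blur section above, here is a minimal sketch of turning on background blur with the WebJS calling effects package. It assumes you already have a LocalVideoStream from a selected camera; confirm the exact package and method names against the video effects quickstart linked above.

```javascript
// Minimal sketch: enable background blur on a local video stream with the WebJS SDK.
// Confirm the exact API surface against the video effects quickstart linked above.
import { Features } from '@azure/communication-calling';
import { BackgroundBlurEffect } from '@azure/communication-calling-effects';

async function enableBackgroundBlur(localVideoStream) {
  const videoEffects = localVideoStream.feature(Features.VideoEffects);
  const blur = new BackgroundBlurEffect();

  // Only start the effect when the current browser/device supports it.
  if (await videoEffects.isSupported(blur)) {
    await videoEffects.startEffects(blur);
  }
}

// Usage (assumed): after picking a camera from the device manager.
// const camera = (await deviceManager.getCameras())[0];
// const localVideoStream = new LocalVideoStream(camera);
// await enableBackgroundBlur(localVideoStream);
```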
Explore Azure Communication Services Email Telemetry: What You Can Do with Email Insights

We're halfway through our first season of the Azure Communication Services Fundamentals series, and we are excited to invite you to our next session, Maximizing Email Insights with Logs and Events on Azure Communication Services. Register here and join us on Thursday, April 17 @ 9 a.m. PT.

What You Will Learn

During this live session, we'll guide you through the following key topics to help you make the most of your email communication with Azure Communication Services.

Introduction to Email Telemetry
- What email telemetry is and why it matters
- The differences between logs and events in the Azure portal

Understanding Sender Reputation
- Gain insights into sender reputation and its impact on email delivery and engagement
- Explore how Azure Communication Services helps you manage sender reputation

Live Demo: Email Insights & Logs
- Discover how to use sample log queries to analyze email performance and troubleshoot delivery issues
- Learn how to create custom queries and workbooks to visualize and understand the data available in your logs

Live Demo: Email Events
- Deploy and view live events in Event Grid Viewer
- Real-time monitoring and analysis of email events

Why You Should Attend

This session is perfect for developers, IT professionals, and tech enthusiasts who want to better understand email analytics. You'll gain practical knowledge on using Azure Communication Services to extract insights from email logs and events, helping you improve your email communication.

How to Join

To register for this event, sign up here on the Reactor event homepage. If you want to engage with members of the Email product team live during the session, log into a YouTube account during the live session and ask questions directly in the chat.

If you missed the previous two sessions, you can watch the on-demand videos here:
- WhatsApp Messaging and Azure Communication Services
- Exploring SMS Capabilities with Azure Communication Services
Learning Series: Azure Communication Services Fundamentals

Are you looking to integrate powerful communication features like WhatsApp, SMS, email, or video calling into your applications? Join us for "Azure Communication Services Learning Series | Fundamentals", a weekly livestream designed to introduce you to the L100 coding concepts of Azure Communication Services. Learn more and register for the upcoming sessions here!

What to Expect

Our live, interactive sessions will be held every Thursday from 9:00 AM to 9:20 AM PST. Each session will be led by an Azure Communication Services expert and will include code and setup in the Azure portal. Whether you're a developer, IT professional, or tech enthusiast, these sessions will help you add communication capabilities to your solutions.

Schedule

Here's what's coming in April:

WhatsApp Messaging and Azure Communication Services
Date: Thursday, April 3, 2025 | Time: 9:00 AM - 9:20 AM PST
Learn how to programmatically send and receive WhatsApp business messages using Azure. This session covers everything from connecting your WhatsApp business account with ACS to handling messages in code and setting up Event Grid notifications.

Exploring SMS Capabilities with Azure Communication Services
Date: Thursday, April 10, 2025 | Time: 9:00 AM - 9:20 AM PST
Discover how to add SMS capabilities to your app. This session will guide you through setting up an ACS resource, acquiring a phone number, and handling incoming SMS messages using an Event Grid listener.

Maximizing Email Insights with Logs and Events on Azure Communication Services
Date: Thursday, April 17, 2025 | Time: 9:00 AM - 9:20 AM PST
Dive into email analytics with ACS! Learn how to set up email logs, track sender reputation, and use sample log queries to gain valuable insights.

Add Audio Video Calling Into Your Apps
Date: Thursday, April 24, 2025 | Time: 9:00 AM - 9:20 AM PST
In this session, you'll build a demo app with a user-friendly interface for audio and video calls. Discover new ACS features like grid views, captions, noise suppression, and translation.

Who Should Attend?

This series is perfect for:
- Developers looking to integrate communication features into their applications.
- IT professionals aiming to understand ACS implementation for their organization.
- Tech enthusiasts eager to explore Microsoft's communication services platform.

Mark your calendar, join us every Thursday, and let's build together!
Azure Communication Services technical documentation table of contents update

Technical documentation is like a map for using a platform—whether you're building services, solving problems, or learning new features, great documentation shows you the way to the solution you need. But what good is a map if it's hard to read or confusing to follow? That's why easy-to-navigate documentation is so important. It saves time, reduces frustration, and helps users focus on what they want to achieve.

Azure Communication Services is a powerful platform, and powerful platforms require great documentation for both new and experienced developers. Our customers tell us consistently that our docs are a crucial part of their experience of using our platform. Some studies suggest that documentation and samples are the most important elements of a great developer experience. In this update, we're excited to share how we've improved our technical documentation's navigation to make it quicker and simpler than ever to find the information you need when you need it.

Why did we change?

For our content to be useful to you, it first needs to be findable. When we launched Azure Communication Services, the small number of articles on our site made it easy to navigate and find relevant content. As we've grown, though, our content has become harder for users to find because of the sheer number of articles they need to navigate. To refresh your memory, the table of contents on our docs site used to be structured with these base categories:

- Overview
- Quickstart
- Tutorials
- Samples
- Concepts
- Resources
- References

These directory names describe the type of content they contain. This structure is a very useful model for products with a clearly defined set of use cases, where a customer's job-to-be-done is typically more constrained, but it breaks down when used for complex, powerful platforms that support a broad range of use cases in the way that Azure Communication Services does.

We tried a number of small-scale changes to address the problems people were having on our site, such as having certain directories default to open on page load, but as the site grew, we became concerned that our site navigation model was becoming confusing to users and having a negative impact on their experience with our product. We decided to test that hypothesis and consider different structures that might serve our content and our customers better.

Our user research team interviewed 18 customers with varying levels of experience on our platform. The research uncovered several problems with the way our docs navigation was structured: confusing folder titles, related topics sitting far apart in the nav model, general confusion around what folder titles meant, problems finding some of the most basic information about using our platform, and a host of other issues. The research made it clear that we had a problem we needed to fix for our users.

What did we change in this release?

To help address these issues, we made a few key changes to make our table of contents simpler and easier to navigate. The changes we made were strictly to site navigation, not page content, and they include:

- We've restructured the root-level navigation to focus on communication modality and feature type, rather than content type, to better model our customers' jobs-to-be-done.
  Topics include:
  - All supported communication channels
  - Horizontal features that span more than one channel
  - Topics of special interest to our customers, like AI
  - Basic needs, like troubleshooting and support

  This will allow customers to more easily find the content they need by focusing on the job they need to do, rather than on the content type.

- We've simplified the overview and fundamentals sections to make the site less overwhelming on first load.
- We've surfaced features that customers told us were difficult to find, such as UI Library, Teams interop, and Job router.
- We've organized the content within each directory to roughly follow a beginner-to-expert path, to make content more linear and to make it easier for a user to find the next step in completing their task.
- We've removed unnecessary layers in our nav, making content easier to find.
- We've added a link to pricing information to each primitive to address a common customer complaint that pricing information is difficult to find and understand.
- We've combined quickstarts, samples, and tutorials into one directory per primitive, called "Samples and tutorials", to address a customer complaint that our category names were confusing.
- We added a directory to each primitive for Resources, to keep important information close by.
- We added root-level directories for Common Scenarios, Troubleshooting, and Help and support.
- We did a full pass across all TOC entries to ensure correct casing, and edited entries for readability and consistency with page content, as well as for length, to adhere to Microsoft guidelines and improve readability.

These changes have led us to a structure that we feel is less taxing for the reader (especially on a first visit), maps more closely to the customer's mental model of the information by focusing on the job-to-be-done rather than the content type, leads them through the content from easiest to hardest, makes it easier for them to find the information they need when they need it, and reminds them of all the different features we support.

Here's what the table of contents looks like on page load as of Feb 6. These changes are live now. You can see them on the Azure Communication Services technical documentation site.

What's next?

In the coming weeks we will continue to make refinements based on customer feedback and our assessment of usage metrics. Our content team will begin updating article content to improve readability and enhance learning. We will be monitoring our changes and seeking your feedback.

How will we monitor the effectiveness of our changes?

To track the effectiveness of our changes and to be sure we haven't regressed, we'll be tracking a few key metrics:

- Bounce rates: We'll be on the lookout for an increase in bounce rates, which would indicate that customers are frequently landing on pages that don't meet their expectations.
- Page views: We'll be tracking the number of page views for our most-visited pages across different features. A decrease in page views for these pages will be an indicator that customers are not able to find pages that had previously been popular.
- Customer interviews: We will be reaching out to some of you to get your impressions of the new structure of our content over the coming weeks.
- Customer surveys: We've created a survey that you can use to give us your feedback. We'll also be adding this link to select pages to allow you to tell us what you think of our changes while you're using them!
So, give our new site navigation a try, and please don't hesitate to share your feedback, either by filling out our survey or by sending an email to acs-docs-feedback@microsoft.com. We look forward to hearing from you!
Copilot in Azure is now integrated in the Voice and Video Insights dashboard

We're excited to announce the public preview of Copilot in Azure integrations in the Voice and Video Insights dashboard of your Azure portal. This dashboard provides an overview of, and debugging guidance for, your Azure Communication Services calls. By integrating Copilot, we've expanded the capabilities of the existing data visualizations in this dashboard. This Copilot enhancement enables communication developers to chat with Copilot to better identify, understand, and improve calling issues that impact their end users' calling experiences.

By embedding Copilot in Azure in the Voice and Video Insights dashboard, communication developers get AI-driven, natural-language summaries of their calling insights, tailored to improve their end users' call experience. These AI-driven conversations enable communication developers to quickly prioritize and focus their call improvement efforts. Instead of reviewing large volumes of public documentation to interpret their call logs, developers can chat with Copilot directly from the dashboard to get personalized guidance on their call issues.

What to expect from your Voice and Video Insights dashboard

We understand the importance of having Copilot in Azure customized to your debugging workflow and visuals, which is why we've built multiple Copilot buttons in the Volume and Reliability tabs to give you starting points for your conversations with Copilot. Each Copilot button starts a conversation based on the visuals you're reviewing in the dashboard. Because we know how important it is to quickly get inspiration and understand the types of questions Copilot can help you with as a developer, each Copilot button has a built-in, click-to-run starter prompt.

Our customized Copilot experience can be accessed by clicking a Copilot button, which opens a chat panel for you to interact with. For example, in the Volume section, you can click the "Ask Copilot about Azure Communication Services SDK" button, or in the User Facing Diagnostics (UFD) section, you can click "Ask Copilot for help" next to the UFD you want to focus on. Lastly, Copilot suggests follow-up prompts based on the context of your chat, to help you further investigate the issue at hand.

Using Copilot in Azure embedded in Voice and Video Insights

As communication developers, you're focused on developing and maintaining high-quality calling experiences for your end users. We're enhancing Copilot in Azure to directly support this core need by providing capabilities such as:

- Track your call usage and details
- Diagnose and reduce call issues

By structuring the Voice and Video Insights features around these key areas, we aim to empower you with AI-driven insights and recommendations that accurately articulate the issues and solutions, streamlining workflows and enhancing your ability to improve your call performance more effectively. In our public preview, we're announcing Copilot buttons that help communication developers in two key scenarios:

- Monitor your call composition and update your SDK with Copilot in Azure
- Troubleshoot in-call issues with Copilot in Azure

Monitor your call composition and update your SDK with Copilot in Azure

As a communication developer, you need tools that scale with your organization's growth and increasing solution complexity. The Copilot buttons in the Volume section of Voice and Video Insights offer high-level insights across all calls in your resource.
To quickly understand your call volume and composition visuals, you can click the Copilot buttons for explanations and guidance on best practices. You can also learn about recent improvements in the Calling SDK and how to update your SDK for optimal performance.

Troubleshoot in-call issues with Copilot in Azure

We recognize that even the best-designed calling solutions can encounter issues. The User Facing Diagnostics (UFD) section is designed to help you identify and address such issues. Given the multitude of variables that can impact users' call experiences, we provide detailed call data to help pinpoint potential root causes and solutions.

The UFD tab enables you to use Copilot for specific insights about how different types of UFDs affect user experiences. Copilot interprets what each UFD signifies and offers practical advice on mitigating these issues to enhance user satisfaction. Moreover, Copilot can share code samples that help you surface in-call notifications, proactively alerting users to call issues and suggesting measures like unmuting their microphone to mitigate the impact. (A minimal sketch of this kind of notification, built on the User Facing Diagnostics API, appears at the end of this post.)

How to get started

We are thrilled to bring you these advanced Copilot integrations, empowering you to achieve greater efficiency and effectiveness in managing and improving your communication services. Stay tuned for ongoing updates and enhancements that further elevate your development experience. Happy coding!

Learn how to get started with Azure Copilot in Voice and Video Insights.
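As referenced in the UFD section above, here is a minimal sketch (not Copilot output) of subscribing to User Facing Diagnostics in the WebJS calling SDK and surfacing an in-call notification. The specific diagnostic names and quality values are assumptions to check against the UFD documentation.

```javascript
// Minimal sketch: surface an in-call notification when a User Facing Diagnostic fires.
// The diagnostic names and quality value below are assumptions; see the UFD docs for the full list.
import { Features } from '@azure/communication-calling';

function watchCallHealth(call, showNotification /* (message) => void, your own UI hook */) {
  const diagnostics = call.feature(Features.UserFacingDiagnostics);

  // Media diagnostics, e.g. the microphone unexpectedly not sending audio.
  diagnostics.media.on('diagnosticChanged', (info) => {
    if (info.diagnostic === 'microphoneMuteUnexpectedly' && info.value === true) {
      showNotification('Your microphone stopped sending audio. Try unmuting or reselecting it.');
    }
  });

  // Network diagnostics, e.g. poor receive quality during the call.
  diagnostics.network.on('diagnosticChanged', (info) => {
    if (info.diagnostic === 'networkReceiveQuality' && info.value === 3 /* assumed DiagnosticQuality.Bad */) {
      showNotification('Your network quality is poor. Consider turning off video.');
    }
  });
}
```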
February 2025 Feature Updates

The Azure Communication Services team is excited to share several new product and feature updates released in January 2025. You can view previous blog articles.

1. Calling Native SDKs add calling to Teams call queues and auto attendants
Status: GA

Calling Native SDKs can now place calls to a Teams call queue or auto attendant. After the call is answered, video calling and screen sharing are available to both the Teams and Azure Communication Services users. These features are available in the Calling SDKs for Android, iOS, and Windows. See the Quickstart documentation for more details.

For more information, see:
- Contact center scenarios
- Teams Call Queue on Azure Communication Services
- Teams Auto Attendant on Azure Communication Services

2. Calling Web & Graph Beta SDKs add Teams shared line appearance
Status: Public Preview

Microsoft Teams shared line appearance lets a user choose a delegate to answer or handle calls on their behalf. This feature is helpful if a user has an administrative assistant who regularly handles the user's calls. In the context of Teams shared line appearance, a manager is someone who authorizes a delegate to make or receive calls on their behalf, and a delegate can make or receive calls on behalf of the delegator.

For more information, see:
- Microsoft Teams shared line appearance
- Tutorial - Teams Shared Line Appearance

3. Number Lookup API
Status: GA

We are excited to announce the General Availability of the Number Lookup API. Azure Communication Services enables you to validate the number format, retrieve insights, and look up a specific phone number using the Communication Services Number Lookup SDK. This new function is part of the Phone Numbers SDK and can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Number Lookup enables you to reliably retrieve number insights (format, type, location, carrier, and so on) before engaging with end users.

For more information, see:
- Number Lookup API concepts in Azure Communication Services
- Look up operator information for a phone number using Azure Communication Services
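Below is a minimal sketch of what a Number Lookup call could look like from JavaScript with the Phone Numbers SDK. The searchOperatorInformation method and option name are assumptions from memory; confirm them against the Number Lookup quickstart linked above.

```javascript
// Minimal sketch: retrieve number insights with the Phone Numbers SDK.
// The searchOperatorInformation call and its option name are assumptions to confirm
// against the Number Lookup quickstart linked above.
import { PhoneNumbersClient } from '@azure/communication-phone-numbers';

const client = new PhoneNumbersClient(process.env.ACS_CONNECTION_STRING);

async function lookupNumber(phoneNumber) {
  // Request operator details in addition to basic format/type information.
  const result = await client.searchOperatorInformation([phoneNumber], {
    includeAdditionalOperatorDetails: true
  });

  for (const info of result.values) {
    console.log(info.phoneNumber, info.numberType, info.isoCountryCode, info.operatorDetails?.name);
  }
}

// lookupNumber('+14255550123');
```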
4. Updated navigation for technical documentation
Status: Live

In response to customer feedback and multiple customer interviews, we're excited to announce an update to the navigational model of our technical documentation. We've adjusted the structure of our docs site navigation to make it quicker and simpler than ever to find the information you need when you need it.

For more information, see: Azure Communication Services technical documentation table of contents update | Microsoft Community Hub

Stay connected with our latest updates

Never miss an update! Click the 'Follow' button to get notified about new blog posts and feature releases, or click here to check out our previous blog posts.

January 2025 Feature Updates

The Azure Communication Services team is excited to share several new product and feature updates released at the end of 2024 and January 2025. (You can view previous blog articles.) To kick off the new year, we're excited to introduce several new features, organized into two key areas for clarity:

- Enhancing User Experience in Voice and Video Calling. New features like Picture-in-Picture (PiP) in the Calling Native iOS SDK, Real Time Text (RTT), managing SMS opt-out preferences, and more feature parity with Teams in the Web calling SDK enhance accessibility and engagement during calls.
- Advanced Management and Analytics for Communication Services. Teams admins can now view Azure Communication Services survey data, while developers can identify web calling participants with custom data tags, improving overall communication management.

Enhancing User Experience in Voice and Video Calling

1. SMS Opt-Out Management API
Status: Public Preview

The Opt-Out Management API is now available in Public Preview for Azure Communication Services. The Opt-Out Management API empowers developers to programmatically manage SMS opt-out preferences, enabling businesses to handle opt-out workflows seamlessly and ensure compliance with global messaging regulations.

Unlike static opt-out management processes, where handling preferences is often manual and disconnected, this API introduces automation and flexibility. With endpoints for adding, removing, and checking opt-out entries, developers can centralize management across multiple channels and create smarter workflows that align with customer preferences and regulatory requirements. For example, a business can manage custom opt-out workflows where customers opt out via SMS and later update their preferences through a web portal. The Opt-Out Management API ensures these changes are synchronized in real time, providing businesses with complete control over compliance and transparency.

Why is this important?

Effective opt-out management is a cornerstone of responsible and compliant SMS communication. The Opt-Out Management API provides the tools to:

- Ensure Compliance: By automating opt-out workflows, businesses can meet regulatory requirements, reducing the risk of violations.
- Improve Efficiency: Replace manual processes with automation to streamline operations, particularly for large-scale messaging campaigns.
- Enhance Customer Trust: Enable customers to manage their preferences across different platforms, ensuring a transparent and consistent experience.

For example, adding numbers to the opt-out list with the SMS SDK:

```csharp
string connectionString = "<Your_Connection_String>";
SmsClient smsClient = new SmsClient(connectionString);
smsClient.OptOuts.Add("<from-phone-number>", new List<string> { "<to-phone-number1>", "<to-phone-number2>" });
```

Release Timeline
- Now Available: Public Preview release for the Opt-Out Management API.
- Future Plans: Enhancements based on feedback will inform the timeline for General Availability (GA), which will be announced later.

Get Started with the Opt-Out Management API

Developers and organizations can begin exploring the API now with the following resources:
- Conceptual Documentation: Short Message Service (SMS) Opt-Out Management API for Azure Communication Services - An Azure Communication Services concept document | Microsoft Learn
- QuickStart Guide: Send OptOut API requests with API (HMAC) | Microsoft Learn

We're looking forward to seeing how businesses leverage the Opt-Out Management API to build smarter, compliant messaging workflows.
2. Real Time Text (RTT)
Status: Public Preview

Another feature coming to Azure Communication Services is Real Time Text (RTT). Real-time text (RTT) is a system for transmitting text over the internet that enables the recipient to receive and display the text at the same rate as it is being produced, without the user needing to press send, giving the effect of immediate and continuous communication. Unlike traditional chat messaging, where the recipient sees the full message only after it is completed and sent, RTT provides an immediate and continuous stream of communication. For example, in a video or voice call, when a user types "Hello, how are you?", each character appears on the recipient's screen as it is typed: "H," then "He," then "Hel," and so on. This character-by-character flow of text creates a dynamic, conversational experience that mirrors spoken communication.

We added new APIs to the Azure Communication Services Calling SDKs so that developers can easily and seamlessly integrate RTT into voice and video calls (an illustrative sketch appears after the release timeline below). These APIs also work in tandem with other accessibility features such as closed captions.

Why is this important?

RTT is an accessibility feature, and Microsoft is committed to accessibility. This commitment is especially relevant to Azure Communication Services, as the ability to inclusively reach as many humans as possible is an essential value proposition of a developer platform that connects people to people, and people to AI. Here's how RTT makes a difference:

- Better Accessibility: RTT empowers individuals with speech or hearing impairments to actively participate in conversations. Its real-time functionality ensures their input is received as fluidly and immediately as spoken words, creating equitable and inclusive communication experiences.
- Enhancing Clarity: In environments where background noise or technical limitations affect audio quality, RTT serves as a reliable text-based alternative to convey important messages accurately.

As communication moves increasingly to internet-based platforms, features like RTT play a critical role in making digital interactions more inclusive and accessible. RTT is not only a valuable feature—it is also essential for meeting global accessibility standards. Under the European Accessibility Act (Directive (EU) 2019/882), voice and video calling services in the European Union will be required to support RTT by June 2025. Azure Communication Services is committed to providing solutions that meet these evolving standards, ensuring that all users, regardless of ability, can engage in meaningful, accessible communication.

Release Timeline
- Now Available: Public Preview release for the Native Calling SDK, enabling RTT for all voice and video calls except Azure Communication Services/Teams interop scenarios.
- January 2025: Public Preview release for the Native UI SDK and Web SDK, with the Web UI library following later in the month.
- Future Plans: RTT functionality for Azure Communication Services/Teams interop scenarios will be available following its implementation in Teams. The General Availability (GA) timeline will be announced later this month.
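As referenced above, here is an illustrative sketch of what wiring RTT into a web call could look like. The feature, method, and event names are assumptions, not confirmed API surface; verify them against the RTT conceptual and quickstart documentation listed below before use.

```javascript
// Illustrative sketch only: the RealTimeText feature, sendRealTimeText method, and
// realTimeTextReceived event names are assumptions to verify against the RTT quickstart.
import { Features } from '@azure/communication-calling';

function wireUpRealTimeText(call, renderPartialText /* (senderId, text) => void */) {
  const rtt = call.feature(Features.RealTimeText);

  // Render incoming text as each update arrives, rather than waiting for a full message.
  rtt.on('realTimeTextReceived', (info) => {
    renderPartialText(info.sender, info.text);
  });

  // Return a sender that pushes the current contents of the local RTT input on every keystroke.
  return (currentText, isFinalized = false) => rtt.sendRealTimeText(currentText, isFinalized);
}
```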
Developers and organizations can begin exploring RTT in Azure Communication Services now with the following resources:
- Conceptual Documentation: https://learn.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/real-time-text
- Native QuickStart Guide: https://learn.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/get-started-with-real-time-text?pivots=programming-language-csharp

3. Calling Native iOS SDK enables Picture-in-Picture
Status: GA

Multitasking has become an essential part of how we work and communicate today. With this in mind, Azure Communication Services has introduced Picture-in-Picture (PiP) mode for video calling applications. This powerful feature enhances the user experience by allowing a video stream to continue in a floating, movable window while users navigate other applications on their devices.

What Is Picture-in-Picture (PiP) Mode?

PiP mode lets users keep their video calls visible and uninterrupted as they switch between apps or multitask. For example, healthcare professionals can input electronic health records (EHR) in Epic while maintaining video communication with patients. Similarly, users in industries like banking or customer service can seamlessly switch to other tasks without ending the call.

How It Works

The Native Calling SDK and UI Library make it simple to implement PiP in your app. They provide built-in functionality for:

- Joining calls: Start and manage calls effortlessly.
- Rendering video streams: Display local and remote video streams within the PiP window.
- Managing permissions: The SDK handles user consent and system requirements, ensuring smooth operation of PiP.

PiP keeps calls active in both the foreground and background. This ensures uninterrupted communication while users:

- Navigate to other apps.
- Switch between video streams.
- Return to the calling experience instantly via the floating PiP window.

Why PiP Matters

A traditional full-screen video UI can limit multitasking, but PiP empowers users to stay productive without sacrificing connectivity. Key benefits include:

- Improved workflow in multitasking scenarios.
- Continued access to video calls while using other apps.
- An intuitive user interface with minimal interruption.

Technical Considerations

PiP functionality depends on the capabilities of the device, such as CPU performance, RAM, and battery state. Supported devices ensure the PiP window is visible, movable, and easy to use, regardless of the app in focus. This feature further enhances the Azure Communication Services UI Library, enabling customers like Contoso to maintain active calls, even when navigating between custom activities like chat or task management.

For more information, see the QuickStart: https://learn.microsoft.com/en-us/azure/communication-services/how-tos/ui-library-sdk/picture-in-picture?tabs=kotlin&pivots=platform-android

4. Explicit consent for Teams meetings recording and transcription
Status: Web calling SDK - General availability

Explicit consent for Teams meetings recording and transcription is now generally available in the Web calling SDK, enhancing user privacy and security. This feature ensures that participants must explicitly consent to being recorded and transcribed, which is crucial in environments with stringent privacy regulations. When a Teams meeting recording or transcription is initiated, participants' microphones and cameras are disabled until they provide consent using the new Azure Communication Services API.
Once consent is given, participants can unmute and enable their cameras. If a user joins a meeting already in progress, they follow the same procedure. However, this feature is not supported in the Android, iOS, or Windows calling SDKs, nor in the Web and Mobile UI libraries. It is only supported in Teams meetings and Teams group calls, with plans to expand within the broader Azure Communication Services ecosystem.

To implement explicit consent for recording and transcription in your Teams meetings, you can use the following sample code to check whether consent is required and to grant consent:

```javascript
const isConsentRequired = callRecordingApi.isTeamsConsentRequired;
callRecordingApi.grantTeamsConsent();
```

Try out the new explicit consent feature in your Teams meetings today and ensure compliance with privacy regulations. For more information, read the detailed documentation.

5. Breakout rooms in Web calling SDK
Status: Web calling SDK & Web UI library - General availability

Breakout rooms are now available in the Web Calling SDK, enhancing flexibility and collaboration in online meetings. This feature allows participants to join smaller, focused groups within a larger meeting, boosting productivity and engagement. Whether it's dividing students into small groups for focused discussions, ensuring private and confidential discussions with clients, or conducting virtual consultations with private patient discussions, breakout rooms offer versatile and useful applications.

Breakout rooms enable participants to join an additional call linked to the main meeting. Users can join and return to the main room as set by the organizers. Participants can view members, engage in chat, and see details of the breakout room. Breakout room managers can access specific room information and join the rooms. One limitation is that Azure Communication Services does not support the creation or management of breakout rooms, and this feature is not available in the Android, iOS, and Windows calling SDKs.

Try out the new breakout rooms feature in the Web calling SDK and UI library today! For more information, read the detailed documentation.

6. Together mode in Web calling SDK
Status: Web calling SDK - General availability

Together Mode in the Web Calling SDK, available in Azure Communication Services, enhances virtual meetings by placing participants in a shared background, making it feel like everyone is in the same room. This feature uses AI to segment and arrange participants naturally, reducing distractions and improving focus. By creating a more immersive and engaging meeting experience, Together Mode helps teams feel more connected, even when they are miles apart. This feature is particularly useful for users with poor connectivity, as it allows them to save bandwidth by receiving a single video stream of all participants.

Whether you're collaborating on a complex project, conducting training sessions, or holding virtual consultations, Together Mode ensures clear and focused communication. It enhances the overall meeting experience, making it more effective and engaging for various industries. Try out Together Mode in your next virtual meeting and experience the difference it makes. For more information and detailed instructions, visit the documentation.

7. Disable attendees' audio and video
Status: Web calling SDK - General availability

The new media access control feature in the Web Calling SDK allows organizers, co-organizers, and presenters to manage attendees' audio and video in Microsoft Teams meetings and group calls.
This feature provides enhanced control over participants' ability to enable their microphone or camera during a session, ensuring a more focused and controlled meeting experience. By reducing distractions and maintaining the meeting's flow, media access control helps create a more productive environment. With this feature, you can manage access for individuals or all attendees in the call, providing the flexibility to tailor the meeting experience as needed. Additionally, you can learn about the current media access state of individual users and Teams meeting options, allowing you to provide the optimal user experience.

```javascript
// Define list of attendees
const acsUser = new CommunicationUserIdentifier('<USER_ID>');
const teamsUser = new MicrosoftTeamsUserIdentifier('<USER_ID>');
const participants = [acsUser, teamsUser];

// Allow selected attendees to unmute
mediaAccessFeature.permitAudio(participants);

// Deny selected attendees to unmute
mediaAccessFeature.forbidAudio(participants);

// Allow selected attendees to turn on video
mediaAccessFeature.permitVideo(participants);

// Deny selected attendees to turn on video
mediaAccessFeature.forbidVideo(participants);
```

Try out the new media access control feature today. For more detailed instructions and information, please refer to the documentation.

Advanced Management and Analytics for Communication Services

1. Teams admins can view Azure Communication Services survey data in Teams support tools
Status: GA

When your Azure Communication Services SDKs submit a survey as part of any Teams interop meeting scenario, the survey data is now accessible through the Teams meeting organizer's support tools. This is in addition to the access Azure Communication Services admins already have through Azure Monitor logs. This update lets Teams admins analyze subjective quality feedback from their Azure Communication Services meeting participants alongside their Teams participants. The specific Teams survey dimensions are referred to as 'rating' and can be located here.

The Azure Communication Services survey data is available in the following Teams support tools:
- Teams Call Quality Dashboard and Teams Call Analytics: Monitor and improve call quality for Microsoft Teams
- Teams Call Quality Connector for Power BI: Use Power BI to analyze CQD data for Microsoft Teams - Microsoft Teams
- Teams Graph API: Microsoft Graph overview and userFeedback resource type – Microsoft Graph v1.0

For more information, see Azure Communication Services End of Call Survey overview

2. Identify web calling participants with custom data tags
Status: GA

Developers can now add up to three custom data attributes to call participants with the WebJS calling client and view them in Azure Monitor. You can use these customizable attributes to enhance your post-call analysis. Since you have control over the data creation, you can use them for A/B testing and labeling (for example, west coast, release version, and so on). You can use Call Diagnostics to search for these attributes or create custom queries with Log Analytics.

For more information, see: Tutorial on how to attach custom tags to your client telemetry
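Here is a minimal sketch of attaching custom data tags when constructing the WebJS CallClient. The diagnostics option shape and the example tag values are assumptions to confirm against the tutorial linked above.

```javascript
// Minimal sketch: attach custom data tags to WebJS client telemetry so they appear
// in Azure Monitor / Call Diagnostics. The diagnostics option shape is an assumption
// to confirm against the tutorial linked above; tag values are illustrative.
import { CallClient } from '@azure/communication-calling';
import { AzureCommunicationTokenCredential } from '@azure/communication-common';

const callClient = new CallClient({
  diagnostics: {
    appName: 'contoso-web-client',                      // hypothetical app name
    appVersion: '2.1.0',                                // hypothetical version
    tags: ['west-coast', 'release-b', 'experiment-42']  // up to three custom tags
  }
});

async function init(userToken) {
  // Calls placed by this agent carry the tags above into post-call telemetry.
  const tokenCredential = new AzureCommunicationTokenCredential(userToken);
  const callAgent = await callClient.createCallAgent(tokenCredential, { displayName: 'Support agent' });
  return callAgent;
}
```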