Customer Engagement
Purple CTO Shares His Azure Communication Services Experience for a Secure Contact Center
In 2022, we knew legacy on-premises systems were not going to sustain us as a fully independent contact center solution for the dozens of customers we serve. At Purple, we needed a cloud-based solution that enabled us to deliver better customer support and engagement—and allowed us to stay competitive in the market. We looked at four options, but it was an easy decision for us: Azure Communication Services bridges the gap between outdated infrastructure and a secure, scalable platform, enabling our entire business to expand its services while ensuring data is securely managed and compliant with regulatory standards.

How we transformed Purple's technological base with Azure Communication Services

Our previous investments in Microsoft Teams and Teams Direct Routing for PSTN connectivity aligned seamlessly with Azure Communication Services' interoperable framework. By adopting ACS, we modernized our technology stack and expanded our service capabilities to include reception and delegation services. The efficiency of Azure Communication Services has allowed us to develop a cost-effective, reliable solution with minimal development effort while also addressing data storage and compliance requirements. Sensitive customer data is now stored securely within customers' Azure tenants, enhancing security and regulatory compliance.

Integrating AI for enhanced contact center capabilities

The migration and integration processes presented logistical and technical challenges, particularly in transferring large volumes of PSTN minutes and transitioning services for existing customers without disrupting their operations. But our team at Purple did a great job integrating ACS into client operations, which has bolstered our position in the contact center market. Leveraging ACS features—such as call automation, direct routing, job router, call recording, transcription, and media functionalities—we enhanced our communication capabilities to support chat, email, and SMS services.

We also tap into several Microsoft AI technologies to improve our contact center capabilities. Services like speech-to-text (STT), text-to-speech (TTS), transcription, summarization, and sentiment analysis provide actionable insights for businesses and agents. Planned integrations with Copilot Studio will let managers and customers query specific contact center metrics, such as agent availability and peak interaction times.

Flexibility and scalability translate to cost-effectiveness for customers

With the flexibility and scalability of ACS, we've developed a business model centered on cost-effectiveness and reliability. Its pay-as-you-go structure supports unlimited agents and queues, charging customers based on usage, which has reduced our costs by up to 50% and improved stability by 83% compared to older solutions. At Purple, we offer granular billing that differentiates costs for VoIP minutes, call recordings, and transcriptions. Integration with platforms like Salesforce, Jira, and Dynamics 365 further streamlines operations and helps us deliver a seamless, high-quality, cost-effective experience for all of our clients.

We are excited about the AI-driven collaboration with Microsoft, which enhances our voice, chat, and CRM integration services, delivering significant value to our customers. This partnership will optimize the end-user experience, seamlessly integrate existing customer data, and provide a more cost-effective solution for businesses to scale and elevate their customer interactions.
- Purple Chief Technology Officer Tjeerd Verhoeff

Build your own real-time voice agent - Announcing preview of bidirectional audio streaming APIs
We are pleased to announce the public preview of bidirectional audio streaming, enhancing the capabilities of voice-based conversational AI. During Satya Nadella's keynote at Ignite, Seth Juarez demonstrated a voice agent engaging in a live phone conversation with a customer. You can now create similar experiences using the Azure Communication Services bidirectional audio streaming APIs and the GPT-4o model. In our recent Ignite blog post, we announced the upcoming preview of our audio streaming APIs. Now that the preview is publicly available, this blog describes how to use the bidirectional audio streaming APIs in the Azure Communication Services Call Automation SDK to build low-latency voice agents powered by the GPT-4o Realtime API.

How does the bidirectional audio streaming API enhance the quality of voice-driven agent experiences?

AI-powered agents facilitate seamless, human-like interactions and can engage with users through various channels such as chat or voice. In the context of voice communication, low latency in conversational responses is crucial; delays can cause users to perceive a lack of response and disrupt the flow of conversation. Gone are the days when building a voice bot required stitching together multiple models for transcription, inference, and text-to-speech conversion. Developers can now stream live audio from an ongoing call (VoIP or telephony) to their backend server logic using the bidirectional audio streaming APIs, leverage GPT-4o to process audio input, and deliver responses back with minimal latency for the caller.

Building your own real-time voice agent

In this section, we walk you through a quickstart for using Call Automation's audio streaming APIs to build a voice agent. Before you begin, ensure you have the following:

- Active Azure subscription: Create an account for free.
- Azure Communication Services resource: Create an Azure Communication Services resource and record your resource connection string for later use.
- Azure Communication Services phone number: A calling-enabled phone number. You can buy a new phone number or use a free trial number.
- Azure Dev Tunnels CLI: For details, see Enable dev tunnel.
- Azure OpenAI resource: Set up an Azure OpenAI resource by following the instructions in Create and deploy an Azure OpenAI Service resource.
- Azure OpenAI Service model: To use this sample, you must have the GPT-4o-Realtime-Preview model deployed. Follow the instructions at GPT-4o Realtime API for speech and audio (Preview) to set it up.
- Development environment: Familiarity with .NET and basic asynchronous programming.

Clone the quickstart sample application, which you can find at Azure Communication Services Call Automation and Azure OpenAI Service:

```
git clone https://github.com/Azure-Samples/communication-services-dotnet-quickstarts.git
```

After completing the prerequisites, open the cloned project and follow these setup steps.

Environment setup

Before running this sample, set up the previously mentioned resources with the following configuration updates.

1. Set up and host your Azure dev tunnel. Azure Dev Tunnels is an Azure service that enables you to expose locally hosted web services to the internet. Use the following commands to connect your local development environment to the public internet. This creates a tunnel with a persistent endpoint URL and enables anonymous access. We use this endpoint to notify your application of calling events from the Azure Communication Services Call Automation service.
```
devtunnel create --allow-anonymous
devtunnel port create -p 5165
devtunnel host
```

2. Navigate to the quickstart CallAutomation_AzOpenAI_Voice in the project you cloned.

3. Add the required API keys and endpoints. Open the appsettings.json file and add values for the following settings:

- DevTunnelUri: your dev tunnel endpoint
- AcsConnectionString: your Azure Communication Services resource connection string
- AzureOpenAIServiceKey: your Azure OpenAI service key
- AzureOpenAIServiceEndpoint: your Azure OpenAI service endpoint
- AzureOpenAIDeploymentModelName: your Azure OpenAI model deployment name
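For reference, a populated appsettings.json might look like the following sketch. The setting names come from the list above; the flat file layout and all values are placeholders you replace with your own:

```json
{
  "DevTunnelUri": "https://<your-tunnel-id>.devtunnels.ms",
  "AcsConnectionString": "endpoint=https://<your-acs-resource>.communication.azure.com/;accesskey=<key>",
  "AzureOpenAIServiceKey": "<your-azure-openai-key>",
  "AzureOpenAIServiceEndpoint": "https://<your-azure-openai-resource>.openai.azure.com/",
  "AzureOpenAIDeploymentModelName": "gpt-4o-realtime-preview"
}
```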
Run the application

1. Ensure your Azure dev tunnel URI is active and points to the correct port of your localhost application.
2. Run the command `dotnet run` to build and run the sample application.
3. Register an Event Grid webhook for the IncomingCall event that points to your dev tunnel URI (https://<your-devtunnel-uri>/api/incomingCall). For more information, see Incoming call concepts. A CLI sketch follows below.
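You can register the webhook in the Azure portal, or with the Azure CLI. As a rough sketch (the subscription name, resource group, and resource names are placeholders; see the Incoming call concepts documentation for the authoritative steps):

```
az eventgrid event-subscription create \
  --name incoming-call-subscription \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Communication/communicationServices/<acs-resource>" \
  --included-event-types Microsoft.Communication.IncomingCall \
  --endpoint "https://<your-devtunnel-uri>/api/incomingCall"
```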
Test the app

Once the application is running:

1. Call your Azure Communication Services number: Dial the number set up in your Azure Communication Services resource. A voice agent answers, enabling you to converse naturally.
2. View the transcription: See a live transcription in the console window.

QuickStart Walkthrough

Now that the app is running and testable, let's explore the quickstart code and how to use the new APIs. Within the program.cs file, the endpoint /api/incomingCall handles inbound calls:

```csharp
app.MapPost("/api/incomingCall", async (
    [FromBody] EventGridEvent[] eventGridEvents,
    ILogger<Program> logger) =>
{
    foreach (var eventGridEvent in eventGridEvents)
    {
        Console.WriteLine("Incoming Call event received.");

        // Handle system events
        if (eventGridEvent.TryGetSystemEventData(out object eventData))
        {
            // Handle the subscription validation event.
            if (eventData is SubscriptionValidationEventData subscriptionValidationEventData)
            {
                var responseData = new SubscriptionValidationResponse
                {
                    ValidationResponse = subscriptionValidationEventData.ValidationCode
                };
                return Results.Ok(responseData);
            }
        }

        var jsonObject = Helper.GetJsonObject(eventGridEvent.Data);
        var callerId = Helper.GetCallerId(jsonObject);
        var incomingCallContext = Helper.GetIncomingCallContext(jsonObject);
        var callbackUri = new Uri(new Uri(appBaseUrl), $"/api/callbacks/{Guid.NewGuid()}?callerId={callerId}");
        logger.LogInformation($"Callback Url: {callbackUri}");

        var websocketUri = appBaseUrl.Replace("https", "wss") + "/ws";
        logger.LogInformation($"WebSocket Url: {websocketUri}");

        var mediaStreamingOptions = new MediaStreamingOptions(
            new Uri(websocketUri),
            MediaStreamingContent.Audio,
            MediaStreamingAudioChannel.Mixed,
            startMediaStreaming: true)
        {
            EnableBidirectional = true,
            AudioFormat = AudioFormat.Pcm24KMono
        };

        var options = new AnswerCallOptions(incomingCallContext, callbackUri)
        {
            MediaStreamingOptions = mediaStreamingOptions,
        };

        AnswerCallResult answerCallResult = await client.AnswerCallAsync(options);
        logger.LogInformation($"Answered call for connection id: {answerCallResult.CallConnection.CallConnectionId}");
    }
    return Results.Ok();
});
```

In the preceding code, MediaStreamingOptions encapsulates all the configuration for bidirectional streaming:

- WebSocketUri: We use the dev tunnel URI with the WebSocket protocol, appending the path /ws. This path handles the WebSocket messages.
- MediaStreamingContent: The current version of the API supports only audio.
- AudioChannel: Supported formats include:
  - Mixed: Contains the combined audio streams of all participants on the call, flattened into one stream.
  - Unmixed: Contains a single audio stream per participant per channel, with support for up to four channels for the most dominant speakers at any given time. You also get a participantRawID to identify the speaker.
- StartMediaStreaming: When set to true, this flag enables the bidirectional stream automatically once the call is established.
- EnableBidirectional: This enables both sending and receiving audio. By default, the service only sends audio data from Azure Communication Services to your application.
- AudioFormat: This can be either 16 kHz pulse code modulation (PCM) mono or 24 kHz PCM mono.

Once you configure all these settings, you need to pass them to AnswerCallOptions.

Now that the call is established, let's dive into handling the WebSocket messages. This code snippet handles the audio data received over the WebSocket. The WebSocket's path is specified as /ws, which corresponds to the WebSocketUri provided in the configuration:

```csharp
app.Use(async (context, next) =>
{
    if (context.Request.Path == "/ws")
    {
        if (context.WebSockets.IsWebSocketRequest)
        {
            try
            {
                var webSocket = await context.WebSockets.AcceptWebSocketAsync();
                var mediaService = new AcsMediaStreamingHandler(webSocket, builder.Configuration);

                // Set the single WebSocket connection
                await mediaService.ProcessWebSocketAsync();
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Exception received {ex}");
            }
        }
        else
        {
            context.Response.StatusCode = StatusCodes.Status400BadRequest;
        }
    }
    else
    {
        await next(context);
    }
});
```

The method `await mediaService.ProcessWebSocketAsync()` processes all incoming messages. It establishes a connection with OpenAI, initiates a conversation session, and waits for a response from OpenAI. This method ensures seamless communication between the application and OpenAI, enabling real-time audio data processing and interaction:

```csharp
// Method to receive messages from the WebSocket
public async Task ProcessWebSocketAsync()
{
    if (m_webSocket == null)
    {
        return;
    }

    // Start the forwarder to the AI model
    m_aiServiceHandler = new AzureOpenAIService(this, m_configuration);
    try
    {
        m_aiServiceHandler.StartConversation();
        await StartReceivingFromAcsMediaWebSocket();
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception -> {ex}");
    }
    finally
    {
        m_aiServiceHandler.Close();
        this.Close();
    }
}
```

Once the application receives data from Azure Communication Services, it parses the incoming JSON payload to extract the audio data segment and then forwards the segment to OpenAI for further processing. The parsing ensures data integrity before sending it to OpenAI for analysis:

```csharp
// Receive messages from the WebSocket
private async Task StartReceivingFromAcsMediaWebSocket()
{
    if (m_webSocket == null)
    {
        return;
    }
    try
    {
        while (m_webSocket.State == WebSocketState.Open || m_webSocket.State == WebSocketState.Closed)
        {
            byte[] receiveBuffer = new byte[2048];
            WebSocketReceiveResult receiveResult = await m_webSocket.ReceiveAsync(new ArraySegment<byte>(receiveBuffer), m_cts.Token);
            if (receiveResult.MessageType != WebSocketMessageType.Close)
            {
                string data = Encoding.UTF8.GetString(receiveBuffer).TrimEnd('\0');
                await WriteToAzOpenAIServiceInputStream(data);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception -> {ex}");
    }
}
```

Here is how the application parses and forwards the data segment to OpenAI using the established session:

```csharp
private async Task WriteToAzOpenAIServiceInputStream(string data)
{
    var input = StreamingData.Parse(data);
    if (input is AudioData audioData)
    {
        using (var ms = new MemoryStream(audioData.Data))
        {
            await m_aiServiceHandler.SendAudioToExternalAI(ms);
        }
    }
}
```

Once the application receives the response from OpenAI, it formats the data to be forwarded to Azure Communication Services and relays the response in the call. If the application detects voice activity while OpenAI is talking, it sends a barge-in message to Azure Communication Services to stop the audio currently playing in the call.
```csharp
// Loop and wait for the AI response
private async Task GetOpenAiStreamResponseAsync()
{
    try
    {
        await m_aiSession.StartResponseAsync();
        await foreach (ConversationUpdate update in m_aiSession.ReceiveUpdatesAsync(m_cts.Token))
        {
            if (update is ConversationSessionStartedUpdate sessionStartedUpdate)
            {
                Console.WriteLine($"<<< Session started. ID: {sessionStartedUpdate.SessionId}");
                Console.WriteLine();
            }

            if (update is ConversationInputSpeechStartedUpdate speechStartedUpdate)
            {
                Console.WriteLine($" -- Voice activity detection started at {speechStartedUpdate.AudioStartTime} ms");
                // Barge-in: the user started speaking, so send a stop-audio message
                var jsonString = OutStreamingData.GetStopAudioForOutbound();
                await m_mediaStreaming.SendMessageAsync(jsonString);
            }

            if (update is ConversationInputSpeechFinishedUpdate speechFinishedUpdate)
            {
                Console.WriteLine($" -- Voice activity detection ended at {speechFinishedUpdate.AudioEndTime} ms");
            }

            if (update is ConversationItemStreamingStartedUpdate itemStartedUpdate)
            {
                Console.WriteLine($" -- Begin streaming of new item");
            }

            // Audio transcript updates contain the incremental text matching the generated output audio.
            if (update is ConversationItemStreamingAudioTranscriptionFinishedUpdate outputTranscriptDeltaUpdate)
            {
                Console.Write(outputTranscriptDeltaUpdate.Transcript);
            }

            // Audio delta updates contain the incremental binary audio data of the generated output audio,
            // matching the output audio format configured for the session.
            if (update is ConversationItemStreamingPartDeltaUpdate deltaUpdate)
            {
                if (deltaUpdate.AudioBytes != null)
                {
                    var jsonString = OutStreamingData.GetAudioDataForOutbound(deltaUpdate.AudioBytes.ToArray());
                    await m_mediaStreaming.SendMessageAsync(jsonString);
                }
            }

            if (update is ConversationItemStreamingTextFinishedUpdate itemFinishedUpdate)
            {
                Console.WriteLine();
                Console.WriteLine($" -- Item streaming finished, response_id={itemFinishedUpdate.ResponseId}");
            }

            if (update is ConversationInputTranscriptionFinishedUpdate transcriptionCompletedUpdate)
            {
                Console.WriteLine();
                Console.WriteLine($" -- User audio transcript: {transcriptionCompletedUpdate.Transcript}");
                Console.WriteLine();
            }

            if (update is ConversationResponseFinishedUpdate turnFinishedUpdate)
            {
                Console.WriteLine($" -- Model turn generation finished. Status: {turnFinishedUpdate.Status}");
            }

            if (update is ConversationErrorUpdate errorUpdate)
            {
                Console.WriteLine();
                Console.WriteLine($"ERROR: {errorUpdate.Message}");
                break;
            }
        }
    }
    catch (OperationCanceledException e)
    {
        Console.WriteLine($"{nameof(OperationCanceledException)} thrown with message: {e.Message}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception during AI streaming -> {ex}");
    }
}
```

Once the data is prepared for Azure Communication Services, the application sends the data over the WebSocket:

```csharp
public async Task SendMessageAsync(string message)
{
    if (m_webSocket?.State == WebSocketState.Open)
    {
        byte[] jsonBytes = Encoding.UTF8.GetBytes(message);

        // Send the serialized audio message over the WebSocket
        await m_webSocket.SendAsync(
            new ArraySegment<byte>(jsonBytes),
            WebSocketMessageType.Text,
            endOfMessage: true,
            CancellationToken.None);
    }
}
```

This wraps up our QuickStart overview. We hope you create outstanding voice agents with the new audio streaming APIs. Happy coding!

For more information about the Azure Communication Services bidirectional audio streaming APIs, check out:

- GPT-4o Realtime API for speech and audio (Preview)
- Audio streaming overview - audio subscription
- Quickstart - Server-side Audio Streaming

Can MS Purview mask data in CE
Hi, can Microsoft Purview enable data masking in Dynamics Customer Engagement / Service? If yes, how can this be achieved? If no, can we expect this feature in the near future?

Note: We would not enable any masking (Field Security Profile) features directly in CE; we would like this to happen using Microsoft Purview.

Send emails via SMTP relay with Azure Communication Services
We've come across multiple cases where customers want to send emails from applications migrated to Azure through some kind of SMTP service. Though we've seen customers opting for Office 365 for SMTP relay, this can create issues due to throttling limitations in the Office service. Also, managing mailboxes and license assignment in the Office 365 console is a different story; customers want a seamless SMTP relay experience from a single console on Azure. In scenarios where you don't want to modify code and just want to point your SMTP server setting at Azure, you can now use the SMTP relay built into Azure Communication Services Email. Azure Communication Services supports different types of notifications, and this blog post offers simple step-by-step instructions for how you can quickly test and then migrate from other services to one native to Azure for a better operational experience and support.

Create an Azure Communication Services resource

The first step is to create a Communication Services resource from the Azure portal. This is a parent service that contains multiple notification services (Chat, SMS, Email, and so on); Email is one of them.

Create an Email resource and add a custom domain

Azure Communication Services Email provides a default domain that looks like "GUID.azurecomm.net" and allows a limited volume of email, so if you need higher volume limits, we recommend creating a custom domain. Once you add a custom domain, the UI provides you with a TXT record which you'll need to create on your name server. Verifying the domain takes about 15 minutes. Once the domain is verified, create SPF and DKIM records so that your email doesn't land in junk and ownership is maintained. You only need the custom domain in the account; you don't have to add the Azure-managed domain explicitly.

Attach the custom domain

Once the custom email domain is validated, attach the Email service to the Azure Communication Services resource.

Create and assign a custom RBAC role for authentication

We'll use port 587 to send email, which is authenticated SMTP. For authentication we use Entra ID. Create a service principal by going to the Entra ID App registrations page. Register the app and create a client secret, and note down the client ID, tenant ID, and secret value; these are used in the next stage for authentication. Next, create a custom RBAC role that has permission to send email: clone the Reader role and add the two email-send actions available in the Azure Communication Services resource provider. Once the role is created, assign it to the service principal.

Test SMTP relay via PowerShell

That's all. Now you'll need the sender email address, which by default is DoNotReply@domain.com, plus credentials to authenticate to the service:

- Username: <Azure Communication Services resource name>.<Entra application ID>.<Entra tenant ID>
- Password: the client secret you generated
- Port: 587
- SMTP server address: smtp.azurecomm.net

You can use any third-party application to send email with these parameters. To showcase this, we can use PowerShell with the same parameters to send an email, as in the sketch below.
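For example, a minimal test with the built-in Send-MailMessage cmdlet. The username format, server, and port come from the values above; the sender domain, recipient address, and secret are placeholders you replace with your own:

```powershell
# Placeholder values: replace with your resource name, app ID, tenant ID, secret, and domain
$username = "<acs-resource-name>.<entra-app-id>.<entra-tenant-id>"
$password = ConvertTo-SecureString "<client-secret>" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

Send-MailMessage -From "DoNotReply@yourdomain.com" -To "recipient@example.com" `
    -Subject "ACS SMTP relay test" -Body "Hello from Azure Communication Services!" `
    -SmtpServer "smtp.azurecomm.net" -Port 587 -UseSsl -Credential $credential
```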
Conclusion

I trust this guide helps you configure SMTP relay and send emails from your custom application without any change to the application or code. Happy learning!

Ignite 2024: Bidirectional real-time audio streaming with Azure Communication Services
Today at Microsoft Ignite, we are excited to announce the upcoming preview of bidirectional audio streaming for the Azure Communication Services Call Automation SDK, which unlocks new possibilities for developers and businesses. This capability enables seamless, low-latency, real-time communication when integrated with services like Azure OpenAI and the real-time voice APIs, significantly enhancing how businesses can build and deploy conversational AI solutions.

With the advent of new AI technologies, companies are developing solutions to reduce customer wait times and improve the overall customer experience. To achieve this, many businesses are turning to AI-powered agents. These AI-based agents must be capable of having conversations with customers in a human-like manner while maintaining very low latencies to ensure smooth interactions. This is especially critical in the voice channel, where any delay can significantly impact the fluidity and natural feel of the conversation. With bidirectional streaming, businesses can now elevate their voice solutions to low-latency, human-like, interactive conversational AI agents.

Our bidirectional streaming APIs enable developers to stream audio from an ongoing call on Azure Communication Services to their web server in real time. On the server, powerful language models interpret the caller's query and stream the responses back to the caller. All this is accomplished while maintaining low latency, ensuring the caller feels like they are speaking to a human. One example is to take the audio streams, process them through Azure OpenAI's real-time voice API, and stream the responses back into the call. With the integration of bidirectional streaming into the Azure Communication Services Call Automation SDK, developers have new tools to innovate:

- Leverage conversational AI solutions: Develop sophisticated customer support virtual agents that can interact with customers in real time, providing immediate responses and solutions.
- Personalized customer experiences: By harnessing real-time data, businesses can offer more personalized and dynamic customer interactions, leading to increased satisfaction and loyalty.
- Reduce wait times for customers: By using bidirectional audio streams in combination with large language models (LLMs), you can build virtual agents that serve as the first point of contact for customers, reducing the need for customers to wait for a human agent to become available.

Integrating with real-time voice-based large language models (LLMs)

With the advancements in voice-based LLMs, developers want to take advantage of services like bidirectional streaming and send audio directly between the caller and the LLM. Today we'll show you how you can start audio streaming through Azure Communication Services. Developers can start bidirectional streaming at the time of answering the call by providing the WebSocket URL:

```csharp
// Answer the call with bidirectional streaming enabled
websocketUri = appBaseUrl.Replace("https", "wss") + "/ws";
var options = new AnswerCallOptions(incomingCallContext, callbackUri)
{
    MediaStreamingOptions = new MediaStreamingOptions(
        transportUri: new Uri(websocketUri),
        contentType: MediaStreamingContent.Audio,
        audioChannelType: MediaStreamingAudioChannel.Mixed,
        startMediaStreaming: true)
    {
        EnableBidirectional = true,
        AudioFormat = AudioFormat.Pcm24KMono
    }
};
```

At the same time, you should open your connection with the Azure OpenAI real-time voice API.
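As a sketch of what opening that connection can look like with the preview Azure OpenAI .NET SDK: the RealtimeConversation types below come from the beta SDK and may change, and the endpoint, key, and deployment name are placeholders.

```csharp
using Azure;
using Azure.AI.OpenAI;
using OpenAI.RealtimeConversation;

// Placeholder values: replace with your Azure OpenAI endpoint, key, and deployment name
var aoaiClient = new AzureOpenAIClient(
    new Uri("https://<your-azure-openai-resource>.openai.azure.com/"),
    new AzureKeyCredential("<your-azure-openai-key>"));

RealtimeConversationClient realtimeClient =
    aoaiClient.GetRealtimeConversationClient("gpt-4o-realtime-preview");

RealtimeConversationSession session = await realtimeClient.StartConversationSessionAsync();

// Configure the session for raw PCM audio so it matches what Azure Communication Services streams
await session.ConfigureSessionAsync(new ConversationSessionOptions
{
    Instructions = "You are a helpful voice assistant.",
    InputAudioFormat = ConversationAudioFormat.Pcm16,
    OutputAudioFormat = ConversationAudioFormat.Pcm16,
});
```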
Once the WebSocket connection is set up, Azure Communication Services starts streaming audio to your web server. From there you can relay the audio to the Azure OpenAI voice API and vice versa. Once the LLM reasons over the content provided in the audio, it streams audio back to your service, which you can stream into the Azure Communication Services call. (More information about how to set this up will be made available after Ignite.)

```csharp
// Receiving streaming data from Azure Communication Services over the WebSocket
private async Task StartReceivingFromAcsMediaWebSocket()
{
    if (m_webSocket == null) return;
    try
    {
        while (m_webSocket.State == WebSocketState.Open || m_webSocket.State == WebSocketState.Closed)
        {
            byte[] receiveBuffer = new byte[2048];
            WebSocketReceiveResult receiveResult = await m_webSocket.ReceiveAsync(new ArraySegment<byte>(receiveBuffer), m_cts.Token);
            if (receiveResult.MessageType == WebSocketMessageType.Close)
                continue;

            var data = Encoding.UTF8.GetString(receiveBuffer).TrimEnd('\0');
            if (StreamingData.Parse(data) is AudioData audioData)
            {
                using var ms = new MemoryStream(audioData.Data);
                await m_aiServiceHandler.SendAudioToExternalAI(ms);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception -> {ex}");
    }
}
```

Streaming audio data back into Azure Communication Services:

```csharp
// Create and serialize the streaming data
private void ConvertToAcsAudioPacketAndForward(byte[] audioData)
{
    var audio = new OutStreamingData(MediaKind.AudioData)
    {
        AudioData = new AudioData(audioData)
    };

    // Serialize the object to a JSON string
    string jsonString = System.Text.Json.JsonSerializer.Serialize<OutStreamingData>(audio);

    // Queue the async send operation for later execution
    try
    {
        m_channel.Writer.TryWrite(async () => await m_mediaStreaming.SendMessageAsync(jsonString));
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception received on ReceiveAudioForOutBound {ex}");
    }
}

// Send the encoded data over the WebSocket to Azure Communication Services
public async Task SendMessageAsync(string message)
{
    if (m_webSocket?.State == WebSocketState.Open)
    {
        byte[] jsonBytes = Encoding.UTF8.GetBytes(message);

        // Send the serialized audio chunk over the WebSocket
        await m_webSocket.SendAsync(new ArraySegment<byte>(jsonBytes), WebSocketMessageType.Text,
            endOfMessage: true, CancellationToken.None);
    }
}
```

To reduce developer overhead when integrating with voice-based LLMs, Azure Communication Services supports a new sample rate of 24 kHz, eliminating the need for developers to resample audio data and helping preserve audio quality in the process.

Next steps

The SDK and documentation will be available in the next few weeks after this announcement, offering tools and information to integrate bidirectional streaming and utilize voice-based LLMs in your applications. Stay tuned, and check our blog for updates!

Anywhere365 integrates Azure Communication Services into their Dialogue Cloud Platform
Anywhere365 incorporates Azure Communication Services into its cloud contact center offering alongside integration with Microsoft Teams. The outcome helps businesses achieve more with AI-powered, intelligent B2C communication solutions.

Help customers reach your organization with click-to-call for Microsoft Teams Phone
Click-to-call for Teams Phone is generally available. Now you can enable your customers to easily reach your organization – for example, sales and support teams – directly from your webpage with a single click.

How to Use Artificial Intelligence with Azure Communication Services
As artificial intelligence (AI) becomes more sophisticated, the possibilities for how it can be used to enhance your communication applications are limited only by your imagination. With artificial intelligence, your Azure Communication Services applications can provide speech recognition, natural language processing, sentiment analysis, translation and transcription, and so much more...
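For instance, here is a minimal sketch of scoring the sentiment of a chat message with the Azure AI Language text analytics client. The endpoint and key are placeholders, and this is one illustrative pattern rather than the only way to combine AI with Azure Communication Services:

```csharp
using Azure;
using Azure.AI.TextAnalytics;

// Placeholder endpoint and key for an Azure AI Language resource
var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Score a message received through Azure Communication Services Chat
string message = "Thanks, the agent resolved my issue right away!";
DocumentSentiment sentiment = client.AnalyzeSentiment(message);

Console.WriteLine($"Sentiment: {sentiment.Sentiment} " +
    $"(positive {sentiment.ConfidenceScores.Positive:P0}, negative {sentiment.ConfidenceScores.Negative:P0})");
```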