Build GPT-automated customer support with Azure Communication Services
Published Sep 25, 2023 08:55 AM

Combine Azure Communication Services, the same platform that runs Microsoft Teams, with the Azure OpenAI service for generative AI using GPT. Automate and transform your customer service interactions with faster, more informed, human-like responses, whether text-based through BOTs or through integrated voice channels. Provide a seamless escalation path for agents, with the context and precise information they need to respond to escalations rapidly and effectively.


Bob Serr, Azure Communication Services VP, joins Jeremy Chapman to share how to build GPT-automated customer support with Azure Communication Services.

 

Switch from text to voice instantly without losing context.


How to integrate generative AI into your communication experiences. Watch this demo.

 

No more repetitive questions.


Technicians have access to your entire chat history for a more informed and efficient conversation. Build GPT-automated customer support with Azure Communication Services.

 

Get expert answers.


Conversational AI provides intelligent responses to queries in real-time — even if the technician isn’t a subject matter expert. See how it works behind the scenes.

 

Watch our video here.

QUICK LINKS:

00:00 — Combine Azure AI and Azure Communication Services

01:02 — What is Azure Communication Services?

02:20 — Developer advantages

03:22 — Demo: customer experience

06:32 — Demo: technician experience

08:16 — See how it works behind the scenes

10:00 — How to get it up and running

12:18 — Wrap up

 

Link References

Get core services up and running at https://aka.ms/ACSAIsampleapp

For more information, check out https://aka.ms/ACSdocs

 

Unfamiliar with Microsoft Mechanics?

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

To keep getting this insider knowledge, join us on social.


Video Transcript:

- Up next, we have a special show with a very practical use of generative AI with a look at what happens when you combine Azure Communication Services, the same platform that runs Microsoft Teams, with the Azure OpenAI service for generative AI using GPT for more informed natural language communications. Now today, we’re going to show you how this can be used to automate and transform your customer service interactions with faster and informed human-like responses, whether text-based through BOTs, or even integrated through voice channels, all while providing a seamless escalation path for your agents with the context, the precise information, and AI Copilot capabilities they need to rapidly and effectively respond to escalations. And joining me today to walk through all of this is Bob Serr, who’s the leader of the Azure Communication Services team. Welcome.

 

- Thanks Jeremy.

 

- I appreciate it. This is a super exciting space and I’m really excited to talk about what we’ve been up to here.

 

- It really is. Combining generative AI with Azure Communication Services is going to be transformative, whether you’re a customer or a service provider. But before we go there, why don’t we take a moment to explain what Azure Communication Services is?

 

- Yeah, sounds good. So as you mentioned, Azure Communication Services is the same underlying platform and infrastructure used to deliver Microsoft Teams globally to more than 300 million active users today. As a developer, Azure Communication Services gives you API level access to the same rich set of capabilities, whether it’s integrated voice or video, text-based chat, in addition to SMS-based mobile messaging and even email options. The nice thing about building these capabilities into your own standalone mobile apps and websites is that the person you’re communicating with doesn’t need to leave your experience for another app. They can stay in the context of your branded app or website.
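
For reference, here is a minimal sketch of what that API-level access can look like, assuming the azure-communication-identity Python SDK and a placeholder connection-string variable: before a customer can join a branded chat or call, your backend issues them an Azure Communication Services identity and access token.

import os
from azure.communication.identity import CommunicationIdentityClient

# Issue an ACS identity and a scoped access token for a customer joining your
# branded web or mobile experience; the client-side chat/calling SDKs use this token.
identity_client = CommunicationIdentityClient.from_connection_string(
    os.environ["ACS_CONNECTION_STRING"]  # placeholder environment variable
)
user, token = identity_client.create_user_and_token(scopes=["chat", "voip"])
print(user.properties["id"], token.token)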

 

- Right. I can see this being really useful for things like financial services or healthcare and retail, or you might want to build out your own virtual engagements and appointments or even your own contact center infrastructure.

 

- Yeah, that’s exactly right. So over the last couple of years, we’ve seen a ton of great examples from our customers here. We have customers branding solutions to run things like video and voice meetings. Everything from video banking to managing appointments between customers and financial advisors, to remote training for lifesaving resuscitation procedures. Core communication capabilities can be integrated into your app experience so there’s no requirement for your customers, for example, to install and sign in to apps, like Microsoft Teams or Zoom in order to engage with you. They just stay within your own app experience.

 

- Right. Because Azure Communication Services is part of Azure, there are even more advantages as a developer then for easier scaling and also incorporating more intelligence into those experiences.

 

- Yeah, there are. There’s a whole spectrum of advantages. You can easily connect to any number of data backends or frontend capabilities, from containers to Azure’s Web App service, and scale across our global network backbone to deliver custom app experiences to just about any device in a scalable, reliable, and secure way. You can also add various intelligence skills by incorporating the prebuilt AI models in the Azure AI service. For example, you can combine Search with Azure AI’s GPT large language model. This lets you create richer, natural language, generative AI experiences by working with your organization’s specific knowledge base in order to generate custom and informed responses. And it’s really this fusion of our communications platform along with Azure services and AI that lets you build B2B or B2C solutions tailored to your specific operations.
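
As a rough illustration of that Search-plus-GPT pattern, here is a minimal Python sketch, not the sample app's actual code: the index name, field name, deployment name, and environment variables are placeholders, assuming the azure-search-documents and openai SDKs.

import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# 1. Retrieve knowledge base passages relevant to the customer's question.
search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="solar-knowledge-base",  # hypothetical index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
question = "Will solar panels still make sense if trees shade part of my roof?"
results = search_client.search(search_text=question, top=3)
context = "\n".join(doc["content"] for doc in results)  # assumes a 'content' field

# 2. Ground the GPT response in the retrieved passages.
openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-02-01",
)
response = openai_client.chat.completions.create(
    model="gpt-35-turbo",  # your Azure OpenAI deployment name
    messages=[
        {"role": "system", "content": "You are a Contoso Energy assistant. Answer briefly, using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)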

 

- Excellent. So I think now, we’ve brought everyone up to speed in terms of what the service is. Why don’t we get into this with an example of how you might integrate generative AI, like we’ve seen with things like Copilot and ChatGPT, into your own communication experiences?

 

- Yeah, that sounds great. So there’s nothing more frustrating than a chat or a call to a support center where the service representative doesn’t have all the information. And we can all probably relate to calls where you, as the requester, might have more information than the person who’s helping you.

 

- Yeah, it’s one of the reasons why I really dread calling the help desk. One day, you might be lucky with someone who’s been around for a while and can answer your specific question. The next time you call, you may be greeted by someone who’s new to the job or who’s less helpful, and they need to escalate you to another tier of support.

 

- Yeah, we hear that all the time. So let me walk through an example of a fully-contained generative AI experience within a public-facing website for an organization in the services industry, which happens to specialize in green energy and solar power. And by the way, what I’m going to show you is an open source sample app that you can find on GitHub at aka.ms/ACSAIsampleapp, which means you can follow along and experiment with it later. I’m going to start in reverse, by showing you the customer experience once it’s running and I’ll show you how it works and the code behind it after. Here, we’re seeing a text-based conversation with our BOT. Notice how the conversation flow is a very natural back and forth dialogue. This is a fairly typical conversation about solar panel feasibility and benefits, and the BOT is providing some informative answers because it’s pulling directly from our knowledge base and formulating responses in real time. So it’s natural and not rigid, like some of the legacy IVR solutions out there. Of course, this is text-based and requires the customer to type. Now, what if I just wanted to pick up the phone to verbalize my questions on a call versus typing?

 

- That’s right. You don’t want to be tethered to your screen. And if you’ve got a mobile device, for example, it might not be the easiest thing to type on either.

 

- Yeah, right. So in this case with a simple statement like, “It might be easier to describe everything over the phone,” you can see the BOT can initiate a voice call. It asks for a phone number, and there’s the area code and number. And this callback approach can be instantaneous. As the customer, you don’t need to restart the conversation from scratch because all of the context is there.
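
As a rough sketch of how a bot service could place that callback, assuming the azure-communication-callautomation Python SDK: the phone numbers and callback endpoint are placeholders, and the exact create_call keywords vary by SDK version, so check the SDK reference rather than treating this as the sample's code.

import os
from azure.communication.callautomation import CallAutomationClient, PhoneNumberIdentifier

client = CallAutomationClient.from_connection_string(os.environ["ACS_CONNECTION_STRING"])

# Place an outbound PSTN call back to the customer; ACS posts call events
# (CallConnected, recognition results, and so on) to the callback URL your app exposes.
client.create_call(
    target_participant=PhoneNumberIdentifier("+14255550123"),         # customer's number
    source_caller_id_number=PhoneNumberIdentifier("+18335550199"),    # your ACS-acquired number
    callback_url="https://contoso-energy.example.com/api/callbacks",  # hypothetical endpoint
)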

 

- The nice thing is you don’t have to navigate things like IVR and you don’t have to wait on hold for long periods of time.

 

- Exactly. None of that. Now, I’m going to take over for a minute to show you the voice interaction live, using generative AI along with the speech service over a standard PSTN, or telephone connection. So there’s my phone call. Hi, this is Bob.

 

- [Bot] This is Contoso Energy. Let’s pick up our conversation where we left off. Can you tell me more about your house’s location and the extent of the tree obstruction on the northeast corner of your roof?

 

- Yes. So I’m in Redmond, Washington, and the house was built 10 years ago with a composite roof. It’s pretty well exposed to natural daylight.

 

- [Bot] A solar installer can help you evaluate your property and determine the best options for solar panel installation, taking into account the tree obstruction and other factors.

 

- Can you connect me with someone who can help?

 

- [Bot] Certainly. I can connect you with a technician in your area. I will send you a link.

 

- Sounds good. Go ahead and send the link over. Okay, I got it. So now I can just tap on this link and that’s going to open up the browser on my phone and take me directly to a web app powered by Azure Communication Services. So Jeremy, why don’t you show us a technician experience on the other end?

 

- Great. So as the technician, I can see the voice and text chat and all the history so far with the BOT. Now, it also includes a nice, easy-to-follow summary that was automatically generated by the Azure OpenAI service. And importantly, because I have everything right here in front of me, I don’t need to ask you the same questions as before. I have all the context I need. And because Bob’s connected on his phone, a technician like me can then ask to see things as well. For example, right now, he’s showing a model of his house. We’re not outside, so this is the best we can do. But you can imagine that if we were outside, he could also use the camera to share a view of the tree lines around the house, or other visual information, like power hookups. And all of this can really help inform the conversation even more. Then during the call, or once it’s completed, the technician can also look up answers in the internal knowledge base using natural language. If I do that, it just acts like a Copilot as I work, handling any questions that I personally can’t answer. Now, I’m not a solar panel expert, so I can use the AI to quickly get intelligent answers. And once the call’s finished, if I hit “Send Summary,” it’s even going to generate a personalized summary email with everything we discussed on the call and even next steps to save even more time.

 

- Yeah. So, in addition to the time savings and efficiency savings, the one-to-one video call creates a better connection between the technician and me to build trust and discuss follow-up actions and timelines. So, after the BOT takes care of the easier-to-gather initial details, the technician is able to seamlessly take things further.

 

- Right. In this example, we were able to cover a lot of ground with just one call. And something like this scenario might have previously involved multiple calls. And like you mentioned, when you did get the technician on the phone, me in this case, I could jump right into the important details. So what was behind everything to make what we just saw possible?

 

- Yeah, of course. There’s a lot going on under the hood. We used Azure Communication Services to deliver the SMS message, the rich voice interaction, text-based chat, and video sharing experiences. And for natural language speech-to-text and vice versa, it’s using our call automation service to convert everything to and from text. Next, as mentioned, we’ve layered in the Azure OpenAI service, which is running its own instance of the GPT large language model behind the scenes. Azure Cognitive Search is used each time the customer enters a question to retrieve information from our knowledge base with the right permissions. The retrieved information and the customer’s question are combined as prompts, along with additional guidance from a system prompt that tailors the response length and tone to suit the chat experience on our website. And all of this is presented to the large language model to generate an answer.
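
For the voice leg specifically, here is a hedged sketch of how generated text could be spoken back to the caller using the Call Automation service’s text-to-speech source, assuming the azure-communication-callautomation SDK; the helper function, voice name, and connection-string variable are illustrative placeholders, not the sample’s code.

import os
from azure.communication.callautomation import CallAutomationClient, TextSource

client = CallAutomationClient.from_connection_string(os.environ["ACS_CONNECTION_STRING"])

def speak_reply(call_connection_id: str, bot_reply: str) -> None:
    # call_connection_id arrives in the CallConnected event posted to your callback URL;
    # the TextSource is synthesized to speech and played to everyone on the call.
    call_connection = client.get_call_connection(call_connection_id)
    call_connection.play_media_to_all(
        TextSource(text=bot_reply, voice_name="en-US-JennyNeural")
    )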

 

- Okay, so that covers everything that we saw with the BOT. Now, how did you transfer the call then to the technician?

 

- So, to facilitate the call escalation, we leverage the job router in Azure Communication Services to assign the job to the most suitable technician with the right skills and availability in the corresponding area. Then, as you’ll recall, the technician was sent an entire log of the conversation, along with a summary up until that point, all generated by prompting the LLM with that information. The LLM was used in the same way to author the summary email after our conversation. And nothing from these interactions, and none of your data, will ever be used to train foundational AI models. The GPT large language models run as separate instances in Azure with all the security, privacy, and governance controls across the Azure platform.
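
A minimal sketch of that summarization step, assuming the openai Python SDK against an Azure OpenAI deployment; the deployment name, prompt wording, and environment variables are placeholders rather than the sample’s actual code.

import os
from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-02-01",
)

def summarize_for_technician(conversation_log: str) -> str:
    # Condense the full bot conversation into a short brief the technician
    # can scan before joining the escalated call.
    response = openai_client.chat.completions.create(
        model="gpt-35-turbo",  # your Azure OpenAI deployment name
        messages=[
            {"role": "system", "content": "Summarize this customer support conversation in a few bullet points: who the customer is, what they need, and any next steps already agreed."},
            {"role": "user", "content": conversation_log},
        ],
    )
    return response.choices[0].message.content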

 

- And Azure Communication Services then added text-based chat, voice services over both PSTN and VoIP, the job router, as you mentioned, to connect to the tech, and video sharing between you, the customer, and me, the technician. So why don’t we dig into the app itself and explain what you’d have to do as a developer to get everything running?

 

- Sounds great. So we used quite a few Azure and AI services in our example, but to be clear, it’s a lot less work to bring these services together for something like this, where otherwise you may have needed to build your own communications platform, AI models, frontends, and backends yourself. Let me walk through a few of the core components to set everything up. You’ll start with the code sample we’ve released, which you can find on GitHub at aka.ms/ACSAIsampleapp. It’s got everything you’ll need to build a running app using Azure Communication Services and the Azure OpenAI service. And this is just a sample, so you can easily customize it to meet your specific scenario. Let me show you an example. Using Azure AI Studio’s Chat Playground, you can easily start experimenting before you commit the elements to your code. Here, you can see that I have the system message set up as standard text to append to any user prompt. It has a few basic rules set up, like how the assistant should respond and what to do if it can’t respond. We have a few of the exchanges from before in the examples. And in the configuration on the right side, you can see that we’ve set a few parameters for the response, like the max response length, temperature, which controls the level of randomness or creativity in the generated text, and Top P, which is a probability threshold used to determine which words are used. I can experiment with this. Let’s say I want to be able to send a summary using SMS text messaging to the customer to recap our conversation, instead of the email that’s included in the sample. I’ll start by prompting the assistant to summarize the content as a very short text message summary. You’ll see, as this scrolls down, that this was my second attempt to get the prompt and response to where I wanted them. Then once I feel good about my prompt, I can move it over to my code. In this case, we’re using Python, and I just need to paste the prompt in place of the email one. And since I’m optimizing for SMS text-based messaging, I can replace the email message target with an SMS client, then add the correct values for the sending and receiving phone number variables. And with just a few simple changes, I’ve tailored the sample app, in this case, to send a text message instead of an email. So we didn’t have time to go through every service and line of code today, but everything I demonstrated is possible with the current set of available services.
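
As a rough illustration of that email-to-SMS swap, assuming the azure-communication-sms SDK; the phone numbers, variable names, and connection string here are placeholders, not the sample’s actual code.

import os
from azure.communication.sms import SmsClient

sms_client = SmsClient.from_connection_string(os.environ["ACS_CONNECTION_STRING"])

# Send the LLM-generated recap as a text message instead of an email.
summary_text = "Recap: discussed solar feasibility for your Redmond home; a technician will follow up with an on-site assessment."
sms_client.send(
    from_="+18335550199",  # your ACS-acquired sending number
    to="+14255550123",     # the customer's number collected during the conversation
    message=summary_text,
)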

 

- It’s really great seeing all the updates and all the possibilities with Azure Communication Services, as well as how you can really supercharge it using generative AI, with my favorite part, even using voice. So, for anyone who’s watching and looking to build something similar, like you demonstrated today, what do you recommend?

 

- Best way to get started is to roll up your sleeves and start building. Deploy the sample we demonstrated today from GitHub at aka.ms/ACSAIsampleapp. That way you can get all the core services up and running quickly, and for more info and ideas, check out aka.ms/ACSdocs.

 

- Thanks so much, Bob, for joining us today and showing us all the possibilities. Of course, keep watching Microsoft Mechanics for the latest tech updates. Subscribe to our channel, if you haven’t already. And as always, thank you for watching.
