Now, more than ever, developers need to respond to rapidly increasing customer demand for support and accurate information, meeting customers where they are, at any time of day, and on an expanding range of platforms and devices. Within just the last few weeks, Azure AI has met unprecedented demand, underpinning over 1,500 COVID-19-related bots via the Microsoft Health Bot service alone, in addition to the more than 1.25 billion messages per month already handled by Azure Bot Service.
As part of our key updates for Build 2020, we continue to improve the developer experience and answer the evolving needs of enterprises looking to implement conversational experiences, both employee and customer facing. Significant announcements include the general availability (GA) of Bot Framework Composer, an integrated development tool for building conversational experiences, and the Virtual Assistant solution, an open source solution for building a branded virtual assistant. Azure Bot Service brings a public preview of Alexa integration, along with new capabilities for the Language Understanding, Speech and QnA Maker Cognitive Services, including the general availability of container support.
Now generally available, Bot Framework Composer is a new open source, visual authoring canvas for developers to design and build conversational experiences. Composer focuses the bot creation process more on conversation design and less on the scaffolding required to begin building awesome bots. Composer easily brings together the common components required to build bots, such as the ability to define Language Understanding models, integrate with QnA Maker and build sophisticated compositions of bot replies using Language Generation.
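As a small illustration of the Language Generation piece, bot replies in Composer are authored as templates in .lg files, where each template can offer several response variants; the template names and properties below are hypothetical:

```lg
# GreetUser
- Hi ${user.name}!
- Hello ${user.name}, good to see you again.

# OrderConfirmation
- Your ${order.size} pizza is on its way.
```

At runtime, Language Generation picks one variant per template and resolves the `${...}` expressions from conversation memory, which keeps response wording out of code.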
Composer also supports building Bot Framework Skills (bots that can perform a set of tasks for another bot), allowing for reusability and componentization of bot solutions as their complexity and surface area increase. Skills built with Composer can be consumed by other bots built with Composer or with the Bot Framework SDK, as well as from Power Virtual Agents.
Find out more and get started with Composer at https://aka.ms/bfcomposer.
We are also excited to make Adaptive Dialogs generally available! Adaptive Dialogs, which underpin the dialog design and management in Composer, enable developers to dynamically update conversation flow based on context and events. This is especially useful when dealing with more sophisticated conversation requirements, such as context switches and interruptions. Bot Framework Skills can now also leverage Adaptive Dialogs.
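For context, an Adaptive Dialog can also be expressed declaratively as JSON, the format Composer produces under the covers. The sketch below is illustrative only, assuming the `$kind` declarative convention; the intent name and message text are hypothetical:

```json
{
  "$kind": "Microsoft.AdaptiveDialog",
  "triggers": [
    {
      "$kind": "Microsoft.OnIntent",
      "intent": "Help",
      "actions": [
        { "$kind": "Microsoft.SendActivity", "activity": "Here is what I can do for you..." }
      ]
    },
    {
      "$kind": "Microsoft.OnUnknownIntent",
      "actions": [
        { "$kind": "Microsoft.SendActivity", "activity": "Sorry, I didn't understand that." }
      ]
    }
  ]
}
```

Because the dialog is trigger-based rather than a fixed waterfall, new triggers (for example, an interruption handler) can be added without restructuring the rest of the conversation flow.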
Also available in early preview are new Generated Dialogs tools. These tools can automatically create robust Bot Framework Composer assets from JSON or JSON Schema that implement best practices like out-of-order slot filling, ambiguity resolution, help, cancel, correction and list manipulation.
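As a rough sketch of the input, the tooling starts from a JSON Schema describing the information the bot should collect; the property names and constraints below are purely illustrative:

```json
{
  "type": "object",
  "properties": {
    "quantity": { "type": "number", "minimum": 1, "maximum": 10 },
    "size": { "type": "string", "enum": ["small", "medium", "large"] },
    "toppings": {
      "type": "array",
      "items": { "type": "string", "enum": ["chicken", "olives", "mushrooms"] }
    }
  },
  "required": ["quantity", "size"]
}
```

From a schema like this, the tools generate the Composer assets (dialogs, language understanding and language generation files) needed to fill each slot, in any order, with help and cancel handling included.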
Other significant updates include a developer preview of Single Sign-On (SSO) capabilities in Microsoft Teams, answering a common requirement from our customers and, ultimately, reducing friction for end users. A new Health Check API allows for monitoring of bots in production environments.
For more details of all of the changes in this latest release, see the version 4.9 release notes.
The Direct Line App Service Extension is now generally available and enables customers to have even greater control over how data is stored and transmitted within their bot using Direct Line or Web Chat. Customers in industries such as banking, medical, legal and others often deploy their solutions into Virtual Networks (VNETs), which provide network isolation. With the Direct Line App Service Extension (Direct Line-ASE), they can now deploy their bot inside the VNET and connect directly to their users' clients rather than passing data through shared cloud infrastructure. In addition, Direct Line-ASE uses web sockets for communication between client and bot, which can reduce latency as well.
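For illustration, a Web Chat client connecting through the extension points its Direct Line domain at the endpoint hosted alongside the bot, rather than the shared regional endpoint. The host name below is a placeholder, and the options shown are a sketch of the client configuration:

```json
{
  "domain": "https://<your-app>.azurewebsites.net/.bot/v3/directline",
  "webSocket": true
}
```

Because the endpoint lives in the same App Service as the bot, traffic between the client and the bot can stay within the customer's network boundary.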
A common scenario, repeatedly encountered by our customers, is the need to hand a conversation off to a human where that is most appropriate, or when a customer explicitly asks to speak to a person. Implementing such scenarios within your own bot has historically been complex, and we aim to reduce the implementation time from weeks to minutes by making pre-built integrations available for popular customer service platforms, including LivePerson and Microsoft Omnichannel. And if an integration for your platform does not yet exist, developing your own is now much easier, with Microsoft providing common patterns backed by updates to the Bot Framework SDK and protocol.
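Under the updated protocol, a handoff is signaled with an event activity that the customer service integration can act on. The sketch below is a simplified, hypothetical example of a handoff initiation event; the field values are illustrative only:

```json
{
  "type": "event",
  "name": "handoff.initiate",
  "value": { "skill": "billing" },
  "attachments": [
    { "contentType": "application/json", "name": "Transcript" }
  ]
}
```

The attached transcript lets the human agent pick up the conversation with full context, and a corresponding status event can inform the bot when the handoff is accepted or completed.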
Whilst Azure Bot Service already provides a broad range of channels, customer requirements continue to evolve, leading to demand for additional integrations. In response, we are pleased to announce the public preview of a new channel for Amazon Alexa Skills, allowing you to build a bot that targets the popular home assistant platform alongside the channels you already build for today. For more details on configuring the new Alexa channel preview, see the updated Bot Framework docs.
The Virtual Assistant Solution Accelerator is now generally available, having reached version 1.0. Virtual Assistant allows developers to quickly stand up a fully functional virtual assistant that they can modify into their own unique experience.
Virtual Assistant has been fully moved over to Bot Framework Skills. The Virtual Assistant sample Skills have moved into their own GitHub repository, allowing for easier updates to both the Virtual Assistant core and the sample Skills that developers have used in their implementations. Virtual Assistant brings all of these pieces together to provide the best starting point for developers who want a bot with the core components needed to scale, working right out of the box.
Virtual Assistant has also added new capabilities as we move beyond v1.0, showing developers how to leverage Bot Framework Composer to create Skills for Virtual Assistant and unlock the power of Adaptive Dialogs. Three new preview Skills are available as Composer/Adaptive versions (Calendar, To Do, Who).
Azure Cognitive Services brings AI within reach of every developer—without requiring machine-learning expertise. At Build 2020, we made several announcements related to new features and improvements across the Cognitive Services used within the Conversational AI eco-system.
The Speech service is broadening language coverage and updating Speech to Text and neural Text to Speech with significant accuracy improvements. Additional new capabilities such as custom commands and pronunciation assessment are making it easier for customers to embed advanced speech capabilities into their solutions.
The Language Understanding service has released a major update to the portal, with a dramatically improved labeling experience, making it easier than ever to build apps and bots that can understand the complex language people tend to use. For example, somebody ordering a pizza might say, “I want two large chicken deep-pan pizzas, a medium pizza with olives and a side of fries.” This is a complex order that mixes two different language structures in a single request, but the new machine-learned entity labeling and decomposition let you extract actionable data with ease (full order, quantities, toppings, modifiers and sides). The new portal makes it easier to break such complex requests apart into their related parts.
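As a rough, illustrative sketch in the .lu authoring format, the pizza order above could be labeled with a machine-learned entity that is then decomposed into sub-entities; the entity names here are hypothetical:

```lu
# OrderPizza
- i want {@pizzaOrder=two large chicken deep-pan pizzas}

@ ml pizzaOrder =
    - @ number quantity
    - @ ml size
    - @ ml topping
```

Decomposing the order entity this way means a single utterance yields both the whole order and each actionable part (quantity, size, topping) without separate models.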
In addition, Language Understanding, as well as Text Analytics, can now be deployed from the cloud to the edge, with containers support for both services now generally available! For more detail, read the Language Understanding blog.
Bot Framework Orchestrator enters private preview, providing a transformer-model-based orchestration capability optimized for Conversational AI. It delivers improved accuracy for the Skill-based routing critical to more sophisticated conversational experiences, reduced latency, and a multi-label classifier that enables multiple intents to be identified in an utterance and processed individually. Moving forward, this capability will replace the current Dispatch capability.
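To illustrate the idea behind multi-label routing (this is not Orchestrator's actual API), a dispatcher can accept every intent whose score clears a threshold instead of picking only the single top intent. The function name, threshold and scores below are hypothetical:

```python
def route_intents(scores, threshold=0.5):
    """Return every intent whose score clears the threshold, best first.

    `scores` maps intent name -> confidence; with multi-label classification
    an utterance like "book a flight and check the weather" can legitimately
    surface more than one intent to process individually.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [intent for intent, score in ranked if score >= threshold]


# Hypothetical classifier output for a compound utterance.
scores = {"BookFlight": 0.91, "CheckWeather": 0.62, "None": 0.05}
print(route_intents(scores))  # ['BookFlight', 'CheckWeather']
```

Single-label dispatch would drop the second intent entirely; thresholded multi-label routing lets each matched Skill handle its part of the utterance.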
The QnA Maker service also receives an update to the editing experience for QnA knowledge bases, adding rich text editor support as well as role-based access control (RBAC), allowing for greater control and governance of knowledge base management.
We encourage you to find out more about our announcements via our Build sessions, either live or on demand afterwards. A range of on-demand content is also available now.
Build Breakout sessions
Conversational AI powered Customer and Employee Virtual Assistants
1st session Wednesday May 20th - 2:00 - 2:30 pm PST. Check the link below for all session times.
Accelerate bot development in Power Virtual Agents
1st session Tuesday May 19th - 3:00 - 3:30 pm PST. Check the link below for all session times.
Deploying Voice Assistants for driverless vehicles
On demand content
Bot Framework Composer: Bot Framework’s new collaborative Conversational AI development environment
Use the Efficiency of Low-Code with the Extensibility to Azure to Design World-Class Chatbots
Conversational AI and human agents working together
Author rich content in QnA Maker knowledge base and enable role based sharing
New features in Language Understanding
Self-Driving Vehicle Systems in a Post COVID-19 World