Customers and partners have an increasing need to deliver advanced conversational assistant experiences tailored to their brand, personalized to their users, and made available across a broad range of canvases and devices. The Virtual Assistant Solution Accelerator answers this need and, with v1.0 released at Build 2020, is now generally available!
The solution accelerator is open source on GitHub and provides you with a set of core foundational capabilities and full customization over the end-user experience, including the name, voice, and personality of your assistant, whilst not sacrificing control over privacy and data.
You can get started in minutes and extend rapidly, using pre-built, reusable conversational Skills that cover common assistant use cases, or develop your own Skills using comprehensive end-to-end tooling such as Bot Framework Composer.
This article provides a high-level overview of the Virtual Assistant Solution Accelerator and a grasp of its key concepts. Over the coming weeks we will release additional articles, each a focused deep dive into one area.
The Virtual Assistant Core is the foundation of your solution. Built on the latest Bot Framework SDK and integrated with Cognitive Services such as Language Understanding (LUIS) for natural language understanding (NLU), it provides the core assistant experience. Key features include:
Several Skills covering common assistant scenarios are available to plug into your assistant immediately, rapidly increasing the capability of your solution without custom development effort. However, as with the core, Skills are fully customizable and make use of the same assets (dialogs, LU, and LG files), allowing you to easily tailor them to your specific requirements.
The following Skills are currently available, each pre-integrated with services such as the Microsoft Graph.
The following experimental (preview) Skills are also available.
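To make the Skill model concrete, here is a minimal TypeScript sketch of how an assistant might route a recognized intent to a registered Skill. All names and types here are illustrative stand-ins, not the actual Bot Framework SDK API, and the keyword matcher is a stand-in for a real call to Language Understanding (LUIS).

```typescript
// Illustrative sketch only: a Skill bundles the intents it handles (as its
// .lu files would define) with the responses it gives (as its .lg templates
// would generate). None of these types are the real Bot Framework API.
interface Skill {
  name: string;
  intents: string[];
  respond(intent: string): string;
}

const calendarSkill: Skill = {
  name: "Calendar",
  intents: ["CheckCalendar", "CreateMeeting"],
  respond: (intent) =>
    intent === "CreateMeeting" ? "Meeting created." : "Here is your schedule.",
};

// Registering a Skill extends the assistant without changing its core code.
const registry = new Map<string, Skill>();
function register(skill: Skill): void {
  for (const intent of skill.intents) registry.set(intent, skill);
}
register(calendarSkill);

// Stand-in for the NLU step; a real assistant would query LUIS here.
function recognizeIntent(utterance: string): string {
  const text = utterance.toLowerCase();
  if (text.includes("meeting")) return "CreateMeeting";
  if (text.includes("calendar")) return "CheckCalendar";
  return "None";
}

// Route an utterance to the Skill registered for its top intent.
function dispatch(utterance: string): string {
  const intent = recognizeIntent(utterance);
  const skill = registry.get(intent);
  return skill ? skill.respond(intent) : "Sorry, I didn't understand that.";
}

console.log(dispatch("Set up a meeting with Darren")); // → "Meeting created."
```

The point of the split is that new capability arrives by registering another `Skill`; the dispatch loop in the core never changes.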
It is crucial that you surface your assistant on the channels being used by your users, meeting them where they are. Via Azure Bot Service (ABS), you can connect your assistant to any channel currently supported by that service, including Microsoft Teams, web chat, Facebook Messenger, Slack, and the new preview channel for Alexa Skills. Beyond the channels currently supported by ABS, you can also take advantage of available Bot Framework adapter implementations, allowing your assistant to accept requests directly from other platforms such as WhatsApp, RingCentral, Google Assistant, and Zoom.
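The adapter idea can be sketched in a few lines of TypeScript: each adapter translates a platform's native payload into a common activity shape the assistant core understands, so the core logic stays channel-agnostic. The types, the `sms` channel, and the `{ Body: ... }` payload below are hypothetical illustrations, not the real Bot Framework adapter API.

```typescript
// Hedged sketch of the adapter pattern; names are illustrative only.
interface Activity {
  type: "message";
  text: string;
  channelId: string;
}

interface ChannelAdapter {
  channelId: string;
  toActivity(raw: unknown): Activity;
}

// A hypothetical adapter for a platform that posts { Body: "..." } webhooks.
const smsAdapter: ChannelAdapter = {
  channelId: "sms",
  toActivity: (raw) => ({
    type: "message",
    text: (raw as { Body: string }).Body,
    channelId: "sms",
  }),
};

// The core bot logic never sees platform-specific payloads.
function onActivity(activity: Activity): string {
  return `[${activity.channelId}] echo: ${activity.text}`;
}

console.log(onActivity(smsAdapter.toActivity({ Body: "hello" }))); // → "[sms] echo: hello"
```

Adding another platform means writing another adapter, not touching `onActivity`.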
We also recognize the need for devices such as phones, tablets, and other IoT devices (e.g. cars, alarm clocks) to serve as interfaces between your assistant and its users. To simplify this, a base Android application is available, including the following capabilities:
In addition to the ability to start with just the Core component, which does not incorporate any pre-configured Skills, we have seen the value in providing sample implementations for specific verticals as we continue to grow the Virtual Assistant's capabilities. These samples combine appropriate Skills and channels to further accelerate the development of assistants within those industries.
The following assistant samples are currently available.
We have overhauled the documentation for the Virtual Assistant with a dedicated site, making it even easier to find information about the Virtual Assistant, its capabilities, customization, and deployment. The site will always contain up-to-date information on the latest version of the solution, including the steps to migrate to new releases, ensuring your assistant remains current.
To get started building your own Virtual Assistant today, you can find dedicated articles for both C# and TypeScript detailing the steps to get up and running within minutes, with end-to-end scripts that configure your assistant and deploy all of the required Azure resources.
We also provide sample continuous integration (CI) and continuous deployment (CD) implementations within Azure DevOps, for both C# and TypeScript.
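As a rough illustration of what such a pipeline looks like, here is a minimal Azure DevOps YAML sketch for building and testing a C# assistant. This is a generic sketch under assumed project paths, not the repository's actual pipeline definition; see the samples in the repo for the real configuration.

```yaml
# Minimal Azure DevOps build pipeline sketch (illustrative; the project
# globs below are assumptions, not the accelerator's actual layout).
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build the assistant'
    inputs:
      command: 'build'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2
    displayName: 'Run unit tests'
    inputs:
      command: 'test'
      projects: '**/*Tests.csproj'
```

A CD pipeline would typically follow this with an ARM template deployment and a bot publish step.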
Looking to the future, now that v1.0 is generally available, we are focused on enabling you to take advantage of the other significant capabilities announced across the Bot Framework ecosystem at Build 2020, such as the ability to develop a Virtual Assistant (and its connected Skills) using Bot Framework Composer. A preview demonstrating Composer integration, and the improved developer experience that comes with the new declarative dialog model underpinning it, can be found at https://aka.ms/bfskillsbuildpreview.
We are excited to hit this milestone and can't wait to see the solutions you build to improve the experiences of your customers!