Build 2020 - Introducing Bot Framework Virtual Assistant 1.0
Published May 21, 2020

Customers and partners have an increasing need to deliver advanced conversational assistant experiences tailored to their brand, personalized to their users, and made available across a broad range of canvases and devices. The Virtual Assistant Solution Accelerator answers this need and, with v1.0 released at Build 2020, is now generally available!


The solution accelerator is open source on GitHub and provides you with a set of core foundational capabilities and full customization over the end-user experience – including the name, voice, and personality of your assistant – whilst not sacrificing control over privacy and data.




You can get started in minutes and extend rapidly, using pre-built, reusable conversational Skills which cover common assistant use cases, or develop your own skills using comprehensive end-to-end tooling such as Bot Framework Composer.


This article provides a high-level overview of the Virtual Assistant Solution Accelerator and a good grasp of the key concepts. Over the coming weeks we will release additional articles, each a focused deep dive into one area.


Virtual Assistant Core

The Virtual Assistant Core is the foundation of your solution, built on top of the latest Bot Framework SDK and integrated with Cognitive Services – such as Language Understanding (LUIS) for natural language understanding (NLU) – to provide the core assistant experience. Key features include:


  • Common dialog implementations - for common assistant requirements, such as introduction, onboarding experience, and handling situations where the conversation needs to be handed off to a human. These base implementations include the language understanding models (.LU files) for recognizing the user intents that trigger them (e.g. “I need to speak to a human”).

  • FAQ and Personality - allowing the bot to answer user questions from FAQs made available in a QnA Maker knowledge base, including taking advantage of the new multi-turn feature. The bot can also make use of the Chit Chat personalities provided by the service, giving your assistant the ability to respond to common ‘small talk’ and making it more engaging. Pre-built data sets are provided for professional, friendly, witty, caring, and enthusiastic personalities, and they are fully customizable.

  • Complex conversational capabilities, including interruption and context switching – interacting via natural language can be complex, but the Virtual Assistant handles common scenarios with ease, such as a user switching context to a different skill, escalating to a human, asking for contextual help, going back to an earlier step, or cancelling their current flow completely.

  • Multi-locale Language Generation (LG) support – The solution takes advantage of LG files (also made generally available at Build), which allow for more natural and dynamic responses. Using LG, you can provide variations for each of your responses, meaning that a conversation doesn’t feel static to a user who is regularly interacting with the assistant. You are also able to access state and in-memory data, allowing you to customize responses based on context. LG files are available in English, Spanish, French, German, Italian and Chinese, with the ability to easily add additional locales if required. Multi-locale support also extends to Language Understanding (LU) assets.

  • Speech support - speech-first experiences can be enabled without any custom code, responding to the evolving change in user behavior towards multi-modal experiences on a broad range of platforms and devices.

  • Telemetry, Logging and Analytics - a telemetry pipeline for Virtual Assistant, leveraging both Power BI and Azure Application Insights. This enables you to quickly and easily understand how your assistant is being used and gain actionable insights to make tangible improvements. Automated logging of transcripts can also be enabled, allowing for deeper analysis at a later date or passing conversation history to a human agent during hand-off. An explicit mechanism is also available to ask users for their feedback when they complete a scenario using the assistant.
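To make the LG concept above concrete, here is a minimal illustrative sketch of an LG file. The template names and memory properties are hypothetical, not taken from the solution's actual assets; each template lists one or more response variations, and `${...}` expressions read state or in-memory data:

```
# GreetingResponse
- Hi! How can I help you today?
- Hello! What can I do for you?
- Welcome back, ${user.name}! What would you like to do?

# TaskAdded
- IF: ${count == 1}
    - I've added that task to your list.
- ELSE:
    - I've added those ${count} tasks to your list.
```

At runtime, one variation is chosen for the requested template, so users who interact with the assistant regularly don't see identical wording every time.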


Skills

Several Skills, covering common assistant scenarios, are available to plug in to your assistant immediately – rapidly increasing the capability of your solution without the need for custom development effort. However, as with the core, Skills are fully customizable and make use of the same assets (dialogs, LU and LG files), allowing you to easily tailor them to suit your specific requirements.
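The LU files referenced above follow a similar markdown-like format, pairing an intent name with example utterances used to train the language understanding model. A minimal illustrative sketch of an escalation intent (the utterances beyond the documented “speak to a human” example are hypothetical) might look like:

```
# Escalate
- I need to speak to a human
- can I talk to a real person
- transfer me to an agent
```

When a user utterance matches the trained intent, the corresponding dialog (here, hand-off to a human) is triggered.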


The following Skills are currently available, each pre-integrated with services such as the Microsoft Graph.

  • Calendar – providing calendar, meeting room booking, and meeting management capabilities for users, e.g. “book a meeting with Darren tomorrow at 2pm at the Hyatt”. Using the Microsoft Graph, this skill is able to correctly search for and identify contacts without users needing to explicitly use their full name or email address.



  • Email – ability to compose, search, read, delete, and reply to email by interacting via natural language, connecting to an Office 365 or Google mailbox.

  • To do – provides task management capabilities for your assistant, allowing users to add, search, and delete tasks, and mark them as complete when they are done. As with the Calendar and Email Skills, Microsoft Graph integration is built in, bringing synchronization of tasks across platforms such as Microsoft To Do, Planner, and Outlook.

  • Point of Interest – users can find points of interest and directions by taking advantage of the integration with Azure Maps and Foursquare.


The following experimental / preview skills are also available.

  • Hospitality – allowing for experiences such as managing reservations, check out, and amenity requests.

  • IT Service Management (ITSM) – a basic skill that provides ticket and knowledge base related capabilities, with support for ServiceNow built in.

  • Music – features artist and playlist lookup for the popular music service Spotify. Playback information is then signalled back to the device through events, enabling native device playback.

Channels and Clients

It is crucial that you surface your assistant on the channels your users already use, meeting them where they are. Via Azure Bot Service (ABS), you can connect your assistant to any channel currently supported by that service, including Microsoft Teams, web chat, Facebook Messenger, Slack, and the new preview channel for Alexa Skills. Beyond the channels currently supported by ABS, you can also take advantage of available Bot Framework adapter implementations, allowing your assistant to accept requests directly from other platforms such as WhatsApp, RingCentral, Google Assistant, and Zoom.


We also recognize the need for devices such as phones, tablets, and other IoT devices (e.g. cars, alarm clocks, etc.) to act as interfaces through which assistants interact with their users. To simplify this, a base Android application is available, including the following capabilities:


  • Can be set as default assistant on the device
  • Speech support via the Direct Line Speech service, including the ability to open and close the mic on the device
  • Ability to render Adaptive Cards
  • Consume and raise events to and from the local Android OS (navigation, phone dialer, etc.)
  • UI supporting threaded conversation
  • Light and dark mode support and easy customization of colors
  • Much more…

Assistant Samples

As we continue to grow the Virtual Assistant's capabilities, and in addition to the ability to start with just the Core component (which does not incorporate any pre-configured skills), we have seen the value in providing sample implementations for specific verticals, combining appropriate skills and channels to further accelerate the development of assistants within those industries.


The following assistant samples are currently available.

  • Enterprise Assistant - a typical configuration of a Virtual Assistant covering scenarios often required by our Enterprise customers implementing internal-facing productivity assistants. This sample provides an implementation of a Virtual Assistant with pre-configured capabilities such as weather, news, calendar, to do, and ITSM. Single sign-on (SSO) support for Azure Active Directory is also included.

  • Hospitality Assistant - a typical configuration of a Virtual Assistant targeted at the hospitality industry. This sample provides an implementation of a Virtual Assistant that includes actions such as event information, POI finding, weather, news, and hospitality scenarios (via the Hospitality Skill detailed above).

Getting started and documentation

We have overhauled the documentation for Virtual Assistant with a dedicated site, making it even easier to find information about the Virtual Assistant, its capabilities, customization, and deployment. This site will always contain up-to-date information on the latest version of the solution, including the steps you can take to migrate to new releases, ensuring your assistant stays current.


To get started today building your own Virtual Assistant, you can find dedicated articles, for both C# and TypeScript, detailing the steps you need to take to get up and running within minutes, with end-to-end scripts to configure your assistant and deploy all of the required Azure resources.
We also provide sample implementations for both continuous integration (CI) and continuous deployment (CD) scenarios within Azure DevOps for both C# and TypeScript.


Looking to the future, now that v1.0 is generally available, we are focused on enabling you to take advantage of the other significant capabilities announced across the Bot Framework ecosystem at Build 2020, such as the ability to develop a Virtual Assistant (and its connected Skills) using Bot Framework Composer. A preview demonstrating Composer integration, and the improved developer experience that comes with the new declarative dialog model underpinning it, is also available.


We are excited about hitting this milestone and can’t wait to see the solutions you build, to improve the experiences of your customers!
