Introducing QnA Maker managed: now in public preview
Published Nov 09 2020

QnA Maker is an Azure Cognitive Service that lets you create a conversational layer over your data in minutes. Today, we are announcing a new version of QnA Maker that advances several core capabilities, such as better relevance and precise answering, by introducing state-of-the-art deep learning technologies.

 

Illustrative representation of QnA Maker functionality.

 

Overview of new QnA Maker managed capabilities

Summary of new features introduced:

  1. A deep-learned ranker with enhanced relevance of results across all supported languages.
  2. Precise phrase/short answer extraction from answer passages.
  3. Simplified resource management by reducing the number of resources deployed.
  4. End-to-end (E2E) region support for authoring and prediction.

A detailed description of the new features follows later in this article. Learn how to migrate to the new QnA Maker managed (Preview) knowledge base here.

QnA Maker managed (Preview) Architecture

  • In the QnA Maker managed (Preview) architecture, there are only two resources: the QnA Maker service for authoring and computation, and Azure Cognitive Search for storage and L1 ranking. This simplifies resource creation and management: customers now manage only 2 resources instead of 5.
  • QnA Maker managed (Preview) also lets the user configure the language setting per Knowledge Base.
  • Computation has been moved out of the user subscription, so customers no longer have to manage scaling and availability. This allowed us to use a state-of-the-art deep-learned model for the L2 ranker, which improves precision horizontally across all 50+ supported languages.
  • The QnA Maker service will be available in multiple regions, giving customers the flexibility to keep their end-to-end service in one region.
  • For inference logs and telemetry, the latest version uses Azure Monitor instead of Application Insights. To keep the experience seamless and easy to adopt, all APIs have been kept backward compatible (a minimal query sketch follows below the architecture diagram), and there is almost no change in the management portal experience.

    

[Image: QnA Maker managed (Preview) architecture diagram]
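To illustrate the backward-compatible runtime API mentioned above, here is a minimal sketch (not an official sample) of querying a published knowledge base with the generateAnswer call. The endpoint host, knowledge base ID, and endpoint key are placeholders you would take from your own resource's Publish page, and the exact host and auth header for the managed preview may differ from the GA-style call shown here.

```python
# Minimal sketch: query a published knowledge base through the generateAnswer
# runtime API, which the post says remains backward compatible.
# ENDPOINT, KB_ID, and KEY are placeholders from your own resource's Publish page.
import requests

ENDPOINT = "https://<your-qnamaker-resource>.azurewebsites.net"  # placeholder host
KB_ID = "<your-knowledge-base-id>"                               # placeholder
KEY = "<your-endpoint-key>"                                      # placeholder

def generate_answer(question: str, top: int = 1) -> dict:
    """Send a question to the knowledge base and return the raw JSON response."""
    url = f"{ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {KEY}",  # GA-style auth; the managed preview may use a different header
        "Content-Type": "application/json",
    }
    body = {"question": question, "top": top}
    response = requests.post(url, headers=headers, json=body, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = generate_answer("can someone ring me")
    for answer in result.get("answers", []):
        print(answer["score"], answer["answer"])
```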

New features of QnA Maker managed (Preview)

This section describes all the distinguishing features of QnA Maker managed (Preview) in detail.

Simplified Create Blade

Onboarding to QnA Maker managed (Preview) and resource creation have been kept simple. You will now see a Managed checkbox, as shown below. As soon as you select the checkbox, the form is updated with the required resources.

             

[Image: Simplified create blade with the Managed checkbox]

Precise Answering

The Machine Reading Comprehension based answer span detection feature is most beneficial in scenarios where customers have long passages as answers in their Knowledge Base. Today, they put a good amount of manual effort into curating short, precise answers and ingesting them into the Knowledge Base.

The new feature gives them the flexibility to choose either the precise answer or the full answer passage; customers can make this decision based on the confidence scores of the precise short answer and the answer passage. Here are some examples showing how short answers can be useful:

 

[Image: Examples of precise short answers extracted from answer passages]
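As a rough illustration of how a client could act on both the precise answer and the full passage, here is a hedged sketch that enables short-answer extraction on a generateAnswer call and falls back to the passage when the span confidence is low. The answerSpanRequest payload field and the answerSpan response field are assumptions about the preview API, and the threshold is arbitrary; check the current API reference before relying on these names.

```python
# Hedged sketch of requesting a precise (short) answer alongside the full answer
# passage. The answerSpanRequest payload shape and the answerSpan response field are
# assumptions about the preview API; verify field names against the current reference.
import requests

ENDPOINT = "https://<your-qnamaker-resource>.cognitiveservices.azure.com"  # placeholder
KB_ID = "<your-knowledge-base-id>"                                         # placeholder
KEY = "<your-endpoint-key>"                                                # placeholder

def ask_with_short_answer(question: str) -> None:
    url = f"{ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    headers = {"Authorization": f"EndpointKey {KEY}", "Content-Type": "application/json"}
    body = {
        "question": question,
        "top": 1,
        # Ask the service to also extract a short answer span from the answer passage.
        "answerSpanRequest": {"enable": True},
    }
    data = requests.post(url, headers=headers, json=body, timeout=10).json()
    for answer in data.get("answers", []):
        span = answer.get("answerSpan") or {}
        # Pick the short answer when its confidence is high enough, else fall back
        # to the full passage, mirroring the choice described in the post.
        if span.get("text") and span.get("score", 0) > 0.5:  # threshold chosen arbitrarily here
            print("Short answer:", span["text"])
        else:
            print("Answer passage:", answer["answer"])

ask_with_short_answer("how do I activate Wi-Fi calling")
```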

Deep-learned ranker

The new L2 ranker is based on the Turing multilingual language model (T-ULRv2), a deep learning-based transformer model, which improves the precision of the service for all languages. For any user query, the new L2 ranker understands the semantics of the query better and returns better-aligned results. The model is not language specific and is targeted at improving the overall precision of all languages horizontally. Here are some examples comparing the results of the current GA service and the QnA Maker managed (Preview) service:

 

 

Query: can someone ring me

  • Current GA result: I can tell you all about Wi-Fi calling, including the devices that support Wi-Fi calling and where you can get more information yourself. Feel free to ask me a question and I'll do what I can to answer it.
  • QnA Maker managed (Preview) result: Yes, you can make and receive calls using Wi-Fi calling. Pretty nifty, right?
  • Improvement in Preview: The new L2 ranker understands the relevance between “ring me” and “make and receive calls” and returns a more relevant result, unlike the current GA service, which returned a generic answer.

Query: can’t connect to mobile data

  • Current GA result: You'll be connected to Wi-Fi, so it'll only use your minutes and text allowances.
  • QnA Maker managed (Preview) result: If you don't have mobile signal, it's no problem. With Three inTouch Wi-Fi Calling, you can call and text whenever you're on Wi-Fi in the UK, even without mobile signal.
  • Improvement in Preview: The new L2 ranker again understands the query better: it recognizes that mobile data is related to mobile signal, and therefore returns a more relevant result from the data in the Knowledge Base than the current GA model.
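If you want to reproduce this kind of comparison on your own Knowledge Base, one approach is to replay a small set of test queries against both deployments and record the top answer and score from each. The sketch below assumes you have the same KB published behind both the current GA service and the managed preview; all endpoints, IDs, and keys are placeholders, and the request shape is the GA-style generateAnswer call.

```python
# Minimal sketch for comparing ranker output across two deployments of the same
# knowledge base (for example, current GA vs. managed preview). Endpoints, KB ids,
# and keys are placeholders.
import requests

DEPLOYMENTS = {
    "current GA": {
        "endpoint": "https://<ga-resource>.azurewebsites.net",                 # placeholder
        "kb_id": "<ga-kb-id>",
        "key": "<ga-endpoint-key>",
    },
    "managed preview": {
        "endpoint": "https://<preview-resource>.cognitiveservices.azure.com",  # placeholder
        "kb_id": "<preview-kb-id>",
        "key": "<preview-endpoint-key>",
    },
}

TEST_QUERIES = ["can someone ring me", "can't connect to mobile data"]

def top_answer(deployment: dict, question: str) -> tuple[str, float]:
    """Return the highest-ranked answer and its score for one deployment."""
    url = f"{deployment['endpoint']}/qnamaker/knowledgebases/{deployment['kb_id']}/generateAnswer"
    headers = {"Authorization": f"EndpointKey {deployment['key']}"}
    data = requests.post(url, headers=headers, json={"question": question, "top": 1}, timeout=10).json()
    best = data["answers"][0]
    return best["answer"], best["score"]

for query in TEST_QUERIES:
    print(f"\nQuery: {query}")
    for name, deployment in DEPLOYMENTS.items():
        answer, score = top_answer(deployment, query)
        print(f"  {name}: [{score:.1f}] {answer[:80]}")
```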

 

E2E region support

With QnA Maker managed (Preview), our management service is no longer limited to the West US region. We are offering end-to-end region support for:

  • South Central US
  • North Europe
  • Australia East.

More regions will be added when the service reaches general availability (GA).

Knowledge Base specific language setting

Customers can now create Knowledge Bases with different language settings within a single service. This feature benefits users who have multi-language scenarios and need to power the service in more than one language. In this case, there is a test index specific to each Knowledge Base, so that customers can verify how the service performs for each language.

You can configure this setting only with the first Knowledge Base of the service; once set, it cannot be updated.
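For illustration only, here is a hypothetical sketch of creating the first Knowledge Base of a service with an explicit language. The preview authoring path and the language property in the payload are assumptions on our part (the post only states that the setting is chosen with the first KB and cannot be changed later); verify them against the current authoring API reference. All names, URLs, and keys are placeholders.

```python
# Hypothetical sketch: create the first knowledge base with an explicit language.
# The create path and the "language" property are assumptions about the preview
# authoring API; AUTHORING_ENDPOINT and AUTHORING_KEY are placeholders.
import requests

AUTHORING_ENDPOINT = "https://<your-qnamaker-resource>.cognitiveservices.azure.com"  # placeholder
AUTHORING_KEY = "<your-authoring-key>"                                               # placeholder

def create_kb(name: str, language: str, qna_pairs: list[dict]) -> dict:
    """Kick off knowledge base creation; returns the operation details to poll."""
    url = f"{AUTHORING_ENDPOINT}/qnamaker/v5.0-preview.1/knowledgebases/create"  # assumed preview path
    headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY, "Content-Type": "application/json"}
    body = {
        "name": name,
        "language": language,  # assumed property: per-KB language, set only on the first KB
        "qnaList": qna_pairs,
    }
    response = requests.post(url, headers=headers, json=body, timeout=30)
    response.raise_for_status()
    return response.json()

operation = create_kb(
    name="support-kb-fr",
    language="French",
    qna_pairs=[{"id": 0, "answer": "Bonjour! Comment puis-je aider?", "questions": ["bonjour"]}],
)
print(operation)
```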

Pricing

The public preview of QnA Maker managed will be free in all regions (you only pay for the Azure Cognitive Search SKU). Standard pricing will apply when the service reaches GA, expected by mid-2021.


19 Comments
Microsoft

Congrats Team. Loved the deployment model without compromising data residency requirements.

Brass Contributor

This is amazing!

Copper Contributor

It's wonderful! I can deploy very quickly and manage resources very easily! Thanks Team!

Copper Contributor

You totally made my day. I just started a chatbot for a customer in need of three languages in one bot. This will help a lot. I already liked the fact that I don't have to manually change keys to link the QnA service to another search service :flexed_biceps:

I do hope that product marketing keeps the price in line with the 'old' approach

Brass Contributor

Looks great! We are trying to propose QnA Maker in a customer's environment to help index a lot (hundreds) of documents. However, in our testing, QnA Maker is having problems reading (ingesting) many of the PDF documents. Is this a common issue? Should we use Azure Search instead?

Microsoft

@HesselW Glad that you liked our new offering. We are still working on the pricing, and yes, it will mostly be aligned with the current pricing model, or even be slightly simpler.

 

@PeytonMcM Thanks a lot. Maybe the issue is with the structure/formatting of those PDF files, as our extraction currently relies on the formatting structure of semi-structured files. Could you please drop us a mail at qnamakerteam@microsoft.com so that we can debug this? We will be happy to help.

Copper Contributor

Whom can I reach out to for questions and bugs?

Copper Contributor

Question for the team. I migrated some bots in production to the preview. It works like a charm and takes a lot of resources off the list.

I did notice that the preview adds a test index for each knowledge base. So for each knowledge base you add, two indexes are used in the search service (instead of one testkb index in the old situation). Is this by design? And if so, what is the rationale behind it? This has a very negative effect on operational costs.

Copper Contributor

I agree with @HesselW; it is really problematic to waste one test index per KB. If I were to create 50 actual KBs, I would have to take a higher-tier Search just for test indexes. Please think about it. I am already running QnA in 5 different languages, with many more to be added next year.

@nerajput Is it possible to bring this to GA sooner, in Q1? It would save a huge amount of hassle managing several resources. Additionally, the West Europe region is missing in the Preview; can it be added?

Just so you know I have adopted QnA Maker since it was in Preview in early 2018.

Copper Contributor

how  to remove short answer after publishing qna maker KB?

Microsoft

@aowens-jmt Please feel free to drop us a mail at qnamakersupport@microsoft.com

Microsoft

@HesselW and @amitc2021 It's great to hear that you are liking the new version of QnA Maker. We use two indexes per KB only when you make the language setting KB-specific instead of service-specific; this is required so that your testing experience and relevance scores are the same as what you will see once published. If you are creating KBs in only one language within a service, please don't use this setting when creating your first KB, as it can only be set at the time of first KB creation and cannot be updated afterwards. Please check: Language support - QnA Maker - Azure Cognitive Services | Microsoft Docs

Copper Contributor

@nerajput 

Thanks. This would definitely help. I will give it a go.

Copper Contributor

@nerajput I've sent a couple of emails to the team and have yet to hear back. Wanted to make sure I was sending them to the proper email address. Thank you.

Copper Contributor

Hello,

 

When is it available for (West Europe) Portugal?

Copper Contributor

Any response to @manishagole96's question "how  to remove short answer after publishing qna maker KB?" I am facing the same issue!

Copper Contributor

@nerajput Would it be possible to add the ability to toggle the display of the short answer, possibly via an Application Setting in the bot's App Service? Showing both short and long answers would be extremely confusing for end users.

Copper Contributor

A question regarding performance: my chatbots using QnA Maker all suffer from the same issue. When the bots are not used for a while (a couple of hours), the QnA Maker service seems to go idle. The first answer from the service takes at least 10 seconds; all calls after that take less than a second. Is there some kind of Always On option for the QnA service?
