<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>All Azure AI posts</title>
    <link>https://techcommunity.microsoft.com/t5/azure-ai/bg-p/AzureAIBlog</link>
    <description>All Azure AI posts</description>
    <pubDate>Fri, 23 Apr 2021 17:25:50 GMT</pubDate>
    <dc:creator>AzureAIBlog</dc:creator>
    <dc:date>2021-04-23T17:25:50Z</dc:date>
    <item>
      <title>Localize your website with Microsoft Translator</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003</link>
      <description>&lt;H1&gt;Web Localization and Ecommerce&lt;/H1&gt;
&lt;P&gt;Using the Microsoft Azure Translator service, you can localize your website in a cost-effective way. With the advent of the internet, the world has become a much smaller place. Vast amounts of information are stored and transmitted across cultures and countries, giving us all the ability to learn and grow from each other. Powered by advanced deep learning, Microsoft Azure Translator delivers fast, high-quality neural machine translation, empowering you to break through language barriers and take advantage of all these powerful vehicles of knowledge and data transfer.&lt;/P&gt;
&lt;P&gt;Research shows that 40% of internet users will never buy from websites in a foreign language[1]. Machine translation from Azure, supporting over &lt;A href="https://www.microsoft.com/en-us/translator/business/languages/" target="_blank" rel="noopener"&gt;90 languages and dialects&lt;/A&gt;, helps you go to market faster and reach buyers in their native languages by localizing your web assets: from your marketing pages to user-generated content, and everything in-between.&lt;/P&gt;
&lt;P&gt;Up to 95% of the online content that companies generate is available in only one language. This is because localizing websites, especially beyond the home page, is cost prohibitive outside of the top few markets. As a result, localized content seldom extends one or two clicks beyond a home page. However, with machine translation from Azure Translator Service, content that wouldn’t otherwise be localized can be, and now most of your content can reach customers and partners worldwide.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;How to localize your website in a cost-effective way?&lt;/H1&gt;
&lt;P&gt;The first step is to understand the nature of your website content and classify it. This is critical because each type of content needs a different level of localization. There are four types of content: a) static and dynamic content, b) content generated by you and content posted by customers, c) sensitive content such as ‘Terms of Use’, and d) text that is part of UX elements.&lt;/P&gt;
&lt;P&gt;Static content, such as information about the organization, product or service descriptions, user guides, terms of use, etc., can be translated once (or infrequently) offline into all required target languages. Translation results can be cached and served from your web server, which can substantially reduce the cost of translation. The machine translation models that power the Azure Translator service are regularly updated to improve quality, so consider refreshing the translations once a quarter, if not every month.&lt;/P&gt;
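&lt;P&gt;As a rough sketch of that caching pattern (illustrative only; the translate_html helper is a hypothetical wrapper around the Translator API call shown later in this post):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import json, os

CACHE_FILE = "translation_cache.json"  # illustrative on-disk cache

def get_cached_translation(page_id, html, target_lang):
    """Return a cached translation; call the Translator API only on a cache miss."""
    cache = json.load(open(CACHE_FILE)) if os.path.exists(CACHE_FILE) else {}
    key = page_id + ":" + target_lang
    if key not in cache:
        # translate_html is a hypothetical wrapper around the Translator call shown later
        cache[key] = translate_html(html, target_lang)
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)
    return cache[key]&lt;/LI-CODE&gt;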
&lt;P&gt;User-generated content such as customer reviews and information requests is dynamic in nature; not all of it requires translation, so translate it only on demand. You could plan for a UX element in the webpage that initiates translation when needed. The target language can be identified from the user’s browser language. Likewise, responses to customers can be translated back into the language of the original request or comment.&lt;/P&gt;
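&lt;P&gt;For example, here is a minimal sketch of serving an on-demand translation keyed off the browser language (Flask is used purely for illustration; load_review and translate_text are hypothetical helpers, the latter wrapping the Translator call shown later):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from flask import Flask, request

app = Flask(__name__)

@app.route("/translate-review")
def translate_review():
    # pick the best supported language from the browser's Accept-Language header
    target = request.accept_languages.best_match(["de", "fr", "es", "ja"]) or "en"
    review = load_review(request.args.get("id"))   # hypothetical data-access helper
    return translate_text(review, target)          # hypothetical wrapper around the Translator API&lt;/LI-CODE&gt;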
&lt;P&gt;For sensitive content such as terms of use and company policies, a human review after machine translation is recommended.&lt;/P&gt;
&lt;P&gt;Text in UX elements of the webpage, such as menus and form labels, is typically only one or two words and has restricted space. Hence, UX testing after translation is recommended to check fit and finish. If necessary, look for an alternate translation or a human review.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Localization.png" style="width: 687px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274754iD39DDD8C6164BB4E/image-dimensions/687x374?v=v2" width="687" height="374" role="button" title="Localization.png" alt="Localization.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Due to the speed and cost-effectiveness of the Azure Translator Service, you can easily test which localization option is optimal for your business and your users. For example, even on a limited budget you can machine-translate into dozens of languages and measure customer traffic in multiple markets in parallel. Using your existing web analytics, you will be able to decide where to invest in human translation in terms of markets, languages, or pages. For example, if the machine-translated information passes a defined page view threshold, your system may trigger a human review of that content. In addition, you will still be able to maintain machine translation for other areas, to maintain reach.&lt;/P&gt;
&lt;P&gt;By combining pure machine translation and paid translation resources, you can select different quality levels for the translations based on your business needs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;How to use Azure Translator service to translate static content&lt;/H1&gt;
&lt;P&gt;Pre-requisite:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an &lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;Azure subscription&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Once you have an Azure subscription,&amp;nbsp;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation" target="_blank" rel="noopener"&gt;create a Translator resource&lt;/A&gt;&amp;nbsp;in the Azure portal.&lt;/LI&gt;
&lt;LI&gt;Once the Translator resource is created, go to the resource and select&amp;nbsp;‘Keys and Endpoint’, which is used to connect your application to the Translator service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_2-1619114278286.png" style="width: 394px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274756iF3BCD1903DCE12C5/image-dimensions/394x503?v=v2" width="394" height="503" role="button" title="Krishna_Doss_2-1619114278286.png" alt="Krishna_Doss_2-1619114278286.png" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_3-1619114278302.png" style="width: 495px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274755iDEB45D2A41247B6B/image-dimensions/495x300?v=v2" width="495" height="300" role="button" title="Krishna_Doss_3-1619114278302.png" alt="Krishna_Doss_3-1619114278302.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;U&gt;Translating static webpage content&lt;/U&gt;:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The code sample below shows how to translate one element of a webpage. You can iterate over each element of your webpage that requires translation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import os, requests, uuid, json
subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com"
path = '/translate'
constructed_url = endpoint + path

params = {
    'api-version': '3.0',
    'to': ['de'], # target language
    'textType': 'html' 
}

headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-type': 'application/json',
    'X-ClientTraceId': str(uuid.uuid4())
}

# You can pass more than one object in body.
body = [{
    "text": "&amp;lt;p&amp;gt;The samples on this page use hard-coded keys and endpoints for simplicity. \
    Remember to &amp;lt;strong&amp;gt;remove the key from your code when you're done&amp;lt;/strong&amp;gt;, and \
    &amp;lt;strong&amp;gt;never post it publicly&amp;lt;/strong&amp;gt;. For production, consider using a secure way of \
    storing and accessing your credentials. See the Cognitive Services security article \
    for more information.&amp;lt;/p&amp;gt;"
}]

request = requests.post(constructed_url, params=params, headers=headers, json=body)
response = request.json()
print (response[0]['translations'][0]['text']) # shows how to access the translated text from response&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Localization is just a fraction of the things that you can do with Translator, so don't let the learning stop here. Check out the latest Translator features, dive deeper with the doc links below, and join the Translator Ask Microsoft Anything session on 4/27.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;U&gt;Get started&lt;/U&gt;:&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sign up for &lt;A href="https://azure.microsoft.com/en-us/free/cognitive-services/" target="_blank" rel="noopener"&gt;Azure trial&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Join Translator engineering team on &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/4-27-21-translator-within-azure-cognitive-services-ama/m-p/2275137" target="_blank" rel="noopener"&gt;Ask Microsoft Anything on 4/27&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn about &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185" target="_blank" rel="noopener"&gt;Document Translation (Preview)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/create-language-translator-mixed-reality-application-unity-azure-cognitive-services/" target="_blank" rel="noopener"&gt;Create a language translator application with Unity and Azure Cognitive Services&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/overview" target="_blank" rel="noopener"&gt;Translator documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&amp;nbsp; CSA Research – Can’t Read, Won’t Buy – B2C Analyzing Consumer Language Preferences and Behaviors in 29 Countries &lt;A href="https://insights.csa-research.com/reportaction/305013126/Marketing" target="_blank" rel="noopener"&gt;https://insights.csa-research.com/reportaction/305013126/Marketing&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 22 Apr 2021 23:57:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003</guid>
      <dc:creator>Krishna_Doss</dc:creator>
      <dc:date>2021-04-22T23:57:02Z</dc:date>
    </item>
    <item>
      <title>Big data preparation in Azure Machine Learning – powered by Azure Synapse Analytics</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671</link>
      <description>&lt;P&gt;Many customers who embark on a machine learning journey deal with big data, and need the power of distributed data processing engines to prepare their data for ML. By offering Apache Spark® (powered by Azure Synapse Analytics) in Azure Machine Learning (Azure ML), we are empowering customers to work on their end-to-end ML lifecycle including large-scale data preparation, featurization, model training, and deployment within Azure ML workspace without the need to switching between multiple tools for data preparation and model training. &lt;SPAN&gt;The ability to build the full ML lifecycle&lt;/SPAN&gt; within Azure ML will reduce the time required for customers to iterate on a machine learning project which typically includes multiple rounds of data preparation and training.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With the preview of managed Apache Spark in Azure ML, customers can use Azure ML notebooks to connect to Spark pools in Azure Synapse Analytics, to do interactive data preparation using&amp;nbsp;PySpark. Customers have the&amp;nbsp;option to configure&amp;nbsp;Spark sessions to quickly experiment and iterate on the data. Once ready, they can leverage Azure ML pipelines to automate their end-to-end ML workflow from data preparation to model deployment all in one environment, &lt;/SPAN&gt;&lt;SPAN&gt;while maintaining their data and model lineage. Customers who prefer to train in the Spark environment can choose to install relevant libraries such as Spark MLlib, MMLSpark, etc. to complete their training on Spark pools.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Customers in preview will be able to benefit from the following key capabilities:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reuse Spark pools from Azure Synapse workspace in Azure ML &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Customers can leverage existing Spark pools from Azure Synapse Analytics (Azure Synapse) in Azure ML by simply linking their Azure ML and Synapse workspaces via Azure ML Studio, the Python SDK, or an ARM template. Customers just need to follow the widget in the UI or use a few lines of code as described in the documentation &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-link-synapse-ml-workspaces" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
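&lt;P&gt;A condensed sketch of the SDK route (class and parameter names follow the linked how-to; the Synapse workspace and link names are placeholders to replace with your own):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration

ws = Workspace.from_config()  # your Azure ML workspace

# point the link at an existing Azure Synapse workspace (name is a placeholder)
synapse_config = SynapseWorkspaceLinkedServiceConfiguration(
    subscription_id=ws.subscription_id,
    resource_group=ws.resource_group,
    name="my-synapse-workspace")

linked_service = LinkedService.register(
    workspace=ws, name="synapselink1", linked_service_config=synapse_config)&lt;/LI-CODE&gt;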
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273990iC3748239EEF8CA15/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Once the workspaces are linked, customers can &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#attach-synapse-spark-pool-as-a-compute" target="_blank" rel="noopener"&gt;attach existing Spark pools&lt;/A&gt; into Azure ML workspace and can also register the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-access-data#supported-data-storage-service-types" target="_blank" rel="noopener"&gt;supported linked services (data store sources)&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273991i887D228F5BCC4424/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Perform interactive data preparation via Spark magic from Azure ML notebooks &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Customers can use Azure ML notebooks to start Spark sessions in PySpark via Spark Magic on attached Spark pools. Customers can register Azure ML datasets to load data from the storage of their choice. For data in Azure Data Lake Storage Gen1 and Gen2, customers can use their own identities to authenticate access to the data by leveraging Azure ML datasets. The attached Spark pools can be used normally in Azure ML experiments, pipelines, and the designer. More information on &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#launch-synapse-spark-pool-for-data-preparation-tasks" target="_blank" rel="noopener"&gt;leveraging Spark Magic for data preparation in Azure ML notebooks is available here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
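&lt;P&gt;As a rough outline of such an interactive session (the notebook magics below follow the linked documentation; the pool alias, storage path, and column names are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# cell 1: start a Spark session on the attached Synapse Spark pool (alias is a placeholder)
%synapse start -c synapse-spark-pool

# cell 2: anything prefixed with %%synapse runs as PySpark on the pool
%%synapse
df = spark.read.option("header", "true").csv("wasbs://container@account.blob.core.windows.net/raw.csv")
df.groupBy("category").count().show()

# cell 3: shut the session down when done
%synapse stop&lt;/LI-CODE&gt;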
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273992i5889A7B6E2B1AC1A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Productionize via Azure ML pipelines to orchestrate E2E ML steps including data preparation&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;After completing the interactive data preparation, customers can leverage Azure ML pipelines to automate data preparation on the Apache Spark runtime as a step in the overall machine learning workflow. Customers can use the SynapseSparkStep for data preparation and choose either a TabularDataset or a FileDataset as input. Customers can also set up an HDFSOutputDatasetConfig to produce the Spark step output as a FileDataset, to be consumed by the following Azure ML pipeline step. More details on &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-synapsesparkstep#use-the-synapsesparkstep-in-a-pipeline" target="_blank" rel="noopener"&gt;how to use Apache Spark (powered by Azure Synapse) in your machine learning pipeline are available here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
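&lt;P&gt;A condensed sketch of such a pipeline step (names and arguments follow the linked how-to; the dataset name, paths, and compute target are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.core import Workspace, Dataset
from azureml.data import HDFSOutputDatasetConfig
from azureml.pipeline.steps import SynapseSparkStep

ws = Workspace.from_config()

# input: a registered dataset; output: files handed to the next pipeline step
step_input = Dataset.get_by_name(ws, "raw_data").as_named_input("raw")
step_output = HDFSOutputDatasetConfig(destination=(ws.get_default_datastore(), "prepared/"))

dataprep_step = SynapseSparkStep(
    name="synapse-dataprep",
    file="dataprep.py",                # your PySpark data preparation script
    source_directory="./code",
    inputs=[step_input],
    outputs=[step_output],
    arguments=["--input", step_input, "--output", step_output],
    compute_target="synapse-compute",  # the attached Spark pool
    driver_memory="7g", driver_cores=4,
    executor_memory="7g", executor_cores=2, num_executors=2)&lt;/LI-CODE&gt;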
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Get started with big data preparation in Azure ML via Apache Spark powered by Azure Synapse&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Get started by visiting our&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#launch-synapse-spark-pool-for-data-preparation-tasks" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&amp;nbsp;and let us know your thoughts. We are committed to making the data preparation experience in Azure ML better for you!&lt;/P&gt;
&lt;P&gt;Learn more about the&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;get started with a free trial&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool" target="_blank" rel="noopener"&gt;Learn more about Azure Synapse big data preparation experience in Azure ML&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-synapsesparkstep" target="_blank" rel="noopener"&gt;Learn more about how to use Apache Spark in your machine learning pipelines&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more about &lt;A href="https://spark.apache.org/" target="_blank" rel="noopener"&gt;Apache Spark&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more about &lt;A href="https://azure.microsoft.com/en-us/services/synapse-analytics/" target="_blank" rel="noopener"&gt;Azure Synapse Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 20 Apr 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671</guid>
      <dc:creator>Xun_Wang</dc:creator>
      <dc:date>2021-04-20T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing Multivariate Anomaly Detection</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/bc-p/2272982#M208</link>
      <description>&lt;P&gt;Great article. I just had a couple of questions?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) I see a preprocessing block. What kind of data preprocessing does it support? Also, does it support preprocessing during training as well as inference?&lt;/P&gt;&lt;P&gt;2) After training a model what metric does the service provide for verifying the accuracy?&lt;/P&gt;&lt;P&gt;3) Do you have limitations on the number of sensors or timestamps supported? Do you have any metric on the latency of training or inference?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 14 Apr 2021 17:51:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/bc-p/2272982#M208</guid>
      <dc:creator>Acash</dc:creator>
      <dc:date>2021-04-14T17:51:32Z</dc:date>
    </item>
    <item>
      <title>Analyzing COVID Medical Papers with Azure and Text Analytics for Health</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890</link>
      <description>&lt;H2&gt;Automatic Paper Analysis&lt;/H2&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Automatic scientific paper analysis is a fast-growing area of study, and thanks to recent improvements in NLP techniques it has advanced greatly over the last few years. In this post, we will show you how to derive specific insights from COVID papers, such as changes in medical treatment over time, or joint treatment strategies using several medications:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618308829590.png" style="width: 625px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272390i95B16A643F02EDE5/image-dimensions/625x200?v=v2" width="625" height="200" role="button" title="shwars_1-1618308829590.png" alt="shwars_1-1618308829590.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;The main idea of the approach I will describe in this post is to extract as much semi-structured information from text as possible, and then store it in a NoSQL database for further processing. Storing the information in a database allows us to run very specific queries to answer some of the questions, as well as to provide a visual exploration tool for medical experts to do structured search and insight generation. The overall architecture of the proposed system is shown below:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ta-diagram.png" style="width: 645px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272392iA187E1F819B9C347/image-dimensions/645x142?v=v2" width="645" height="142" role="button" title="ta-diagram.png" alt="ta-diagram.png" /&gt;&lt;/span&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;We will use different Azure technologies to gain insights into the paper corpus, such as&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics for Health&lt;/A&gt;&lt;SPAN&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cosmos-db/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;CosmosDB&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI&lt;/A&gt;&lt;SPAN&gt;. Now let’s focus on individual parts of this diagram and discuss them in detail.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;BLOCKQUOTE&gt;If you want to experiment with text analytics yourself, you will need an Azure account. You can always get a&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;free trial&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;if you do not have one. You may also want to check out&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;other AI technologies for developers&lt;/A&gt;.&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="covid-scientific-papers-and-cord-dataset"&gt;COVID Scientific Papers and CORD Dataset&lt;/H2&gt;
&lt;P&gt;The idea of applying&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding"&gt;NLP&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;methods to scientific literature seems quite natural. First of all, scientific texts are already well-structured: they contain things like keywords and an abstract, as well as well-defined terms. Thus, at the very beginning of the COVID pandemic, a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge" target="_blank" rel="noopener"&gt;research challenge was launched on Kaggle&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to analyze scientific papers on the subject. The dataset behind this competition is called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.semanticscholar.org/cord19" target="_blank" rel="noopener"&gt;CORD&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(&lt;A href="https://arxiv.org/pdf/2004.10706.pdf" target="_blank" rel="noopener"&gt;publication&lt;/A&gt;), and it contains a constantly updated corpus of everything that is published on topics related to COVID. Currently, it contains more than 400,000 scientific papers, about half of them with full text.&lt;/P&gt;
&lt;P&gt;This dataset consists of the following parts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Metadata file&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge?select=metadata.csv" target="_blank" rel="noopener"&gt;Metadata.csv&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;contains the most important information for all publications in one place. Each paper in this table has a unique identifier&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cord_uid&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which in fact does not turn out to be completely unique once you actually start working with the dataset). The information includes:
&lt;UL&gt;
&lt;LI&gt;Title of publication&lt;/LI&gt;
&lt;LI&gt;Journal&lt;/LI&gt;
&lt;LI&gt;Authors&lt;/LI&gt;
&lt;LI&gt;Abstract&lt;/LI&gt;
&lt;LI&gt;Date of publication&lt;/LI&gt;
&lt;LI&gt;doi&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Full-text papers&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;document_parses&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directory, containing structured text in JSON format, which greatly simplifies the analysis.&lt;/LI&gt;
&lt;LI&gt;Pre-built&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Document Embeddings&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;that map&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cord_uid&lt;/CODE&gt;s to float vectors that reflect the overall semantics of the paper.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In this post, we will focus on paper abstracts, because they contain the most important information from the paper. However, for full analysis of the dataset, it definitely makes sense to use the same approach on full texts as well.&lt;/P&gt;
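&lt;P&gt;To give an idea of how to get at the abstracts, here is a small sketch using Pandas (the column names follow the metadata description above):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import pandas as pd

# metadata.csv is the file described above; keep only papers that actually have an abstract
meta = pd.read_csv("metadata.csv", low_memory=False)
abstracts = meta[["cord_uid", "title", "publish_time", "abstract"]].dropna(subset=["abstract"])
print(len(abstracts), "abstracts to analyze")&lt;/LI-CODE&gt;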
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="what-ai-can-do-with-text"&gt;What AI Can Do with Text?&lt;/H2&gt;
&lt;P&gt;In the recent years, there has been a huge progress in the field of Natural Language Processing, and very powerful neural network language models have been trained. In the area of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding"&gt;NLP&lt;/ABBR&gt;, the following tasks are typically considered:&lt;/P&gt;
&lt;DL&gt;
&lt;DT&gt;Text classification / intent recognition&lt;/DT&gt;
&lt;DD&gt;In this task, we need to classify a piece of text into a number of categories. This is a typical classification task.&lt;/DD&gt;
&lt;DD&gt;&lt;STRONG&gt;Sentiment Analysis&lt;/STRONG&gt;&lt;/DD&gt;
&lt;DD&gt;We need to return a number that shows how positive or negative the text is. This is a typical regression task.&lt;/DD&gt;
&lt;DD&gt;&lt;STRONG&gt;Named Entity Recognition (&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;)&lt;/STRONG&gt;&lt;/DD&gt;
&lt;DD&gt;In&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;, we need to extract named entities from text, and determine their type. For example, we may be looking for names of medicines, or diagnoses. Another task similar to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;keyword extraction&lt;/STRONG&gt;.&lt;/DD&gt;
&lt;DD&gt;&lt;STRONG&gt;Text summarization&lt;/STRONG&gt;&lt;/DD&gt;
&lt;DD&gt;Here we want to be able to produce a short version of the original text, or to select the most important pieces of text.&lt;/DD&gt;
&lt;DD&gt;&lt;STRONG&gt;Question Answering&lt;/STRONG&gt;&lt;/DD&gt;
&lt;DD&gt;In this task, we are given a piece of text and a question, and our goal is to find the exact answer to this question from text.&lt;/DD&gt;
&lt;DD&gt;&lt;STRONG&gt;Open-Domain Question Answering (&lt;ABBR title="Open Domain Question Answering"&gt;ODQA&lt;/ABBR&gt;)&lt;/STRONG&gt;&lt;/DD&gt;
&lt;DD&gt;The main difference from previous task is that we are given a large corpus of text, and we need to find the answer to our question somewhere in the whole corpus.&lt;/DD&gt;
&lt;/DL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;In&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://soshnikov.com/azure/deep-pavlov-answers-covid-questions/" target="_blank" rel="noopener"&gt;one of my previous posts&lt;/A&gt;, I have described how we can use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Open Domain Question Answering"&gt;ODQA&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;approach to automatically find answers to specific COVID questions. However, this approach is not suitable for serious research.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;To make some insights from text,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;seems to be the most prominent technique to use. If we can understand specific entities that are present in text, we could then perform semantically rich search in text that answers specific questions, as well as obtain data on co-occurrence of different entities, figuring out specific scenarios that interest us.&lt;/P&gt;
&lt;P&gt;To train a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;model, as well as any other neural language model, we need a reasonably large dataset that is properly marked up. Finding those datasets is often not an easy task, and producing them for a new problem domain often requires an initial human effort to mark up the data.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="pre-trained-language-models"&gt;Pre-Trained Language Models&lt;/H2&gt;
&lt;P&gt;Luckily, modern&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)" target="_blank" rel="noopener"&gt;transformer language models&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can be trained in a semi-supervised manner using transfer learning. First, the base language model (for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270" target="_blank" rel="noopener"&gt;&lt;ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model"&gt;BERT&lt;/ABBR&gt;&lt;/A&gt;) is trained on a large corpus of text, and then it can be specialized to a specific task such as classification or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;on a smaller dataset.&lt;/P&gt;
&lt;P&gt;This transfer learning process can also contain an additional step: further training of the generic pre-trained model on a domain-specific dataset. For example, in the area of medical science Microsoft Research has pre-trained a model called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract" target="_blank" rel="noopener"&gt;PubMedBERT&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(&lt;A href="https://arxiv.org/abs/2007.15779" target="_blank" rel="noopener"&gt;publication&lt;/A&gt;), using texts from the PubMed repository. This model can then be further adapted to different specific tasks, provided we have some specialized datasets available.&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pubmedbert.png" style="width: 470px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272398i48A36098BF831B6F/image-dimensions/470x352?v=v2" width="470" height="352" role="button" title="pubmedbert.png" alt="pubmedbert.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2 id="text-analytics-cognitive-services"&gt;Text Analytics Cognitive Services&lt;/H2&gt;
&lt;P&gt;However, training a model requires a lot of skills and computational power, in addition to a dataset. Microsoft (as well as some other large cloud vendors) also makes some pre-trained models available through the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Application Programming Interface"&gt;API&lt;/ABBR&gt;. Those services are called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cognitive Services&lt;/A&gt;, and one of those services for working with text is called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/text-analytics/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt;. It can do the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Keyword extraction&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for some common entity types, such as people, organizations, dates/times, etc.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sentiment analysis&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Language Detection&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Entity Linking&lt;/STRONG&gt;, by automatically adding internet links to some of the most common entities. This also performs&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;disambiguation&lt;/STRONG&gt;: for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;Mars&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can refer either to the planet or to a chocolate bar, and the correct link is used depending on the context.&lt;/LI&gt;
&lt;/UL&gt;
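&lt;P&gt;A minimal sketch of calling the service from Python (assuming an existing Text Analytics resource, with its endpoint and key in the endpoint and key variables):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
docs = [abstract_text]  # one paper abstract

# keyword extraction and general-purpose NER on the same document
print(client.extract_key_phrases(docs)[0].key_phrases)
for ent in client.recognize_entities(docs)[0].entities:
    print(ent.text, ent.category, ent.confidence_score)&lt;/LI-CODE&gt;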
&lt;P&gt;For example, let’s have a look at one medical paper abstract analyzed by Text Analytics:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_0-1618309756290.png" style="width: 598px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272399iDC0B993F45A291BE/image-dimensions/598x136?v=v2" width="598" height="136" role="button" title="shwars_0-1618309756290.png" alt="shwars_0-1618309756290.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can see, some specific entities (for example, HCQ, which is short for hydroxychloroquine) are not recognized at all, while others are poorly categorized. Luckily, Microsoft provides a specialized version of the service,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics for Health&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="text-analytics-for-health"&gt;Text Analytics for Health&lt;/H2&gt;
&lt;P&gt;Text Analytics for Health is a cognitive service that exposes a pre-trained PubMedBERT model with some additional capabilities. Here is the result of extracting entities from the same piece of text using Text Analytics for Health:&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618309813758.png" style="width: 625px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272400iD71522A990F161E2/image-dimensions/625x180?v=v2" width="625" height="180" role="button" title="shwars_1-1618309813758.png" alt="shwars_1-1618309813758.png" /&gt;&lt;/span&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Currently, Text Analytics for Health is available as a gated preview, meaning that you need to request access to use it in your specific scenario. This is done according to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/ai/responsible-ai?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Ethical AI&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;principles, to avoid irresponsible usage of this service in cases where human health depends on its results. You can request access&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/csgate" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;To perform the analysis, we can use a recent version of the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/textanalytics/azure-ai-textanalytics/README.md" target="_blank" rel="noopener"&gt;Text Analytics Python SDK&lt;/A&gt;, which we need to pip-install first:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install azure.ai.textanalytics==5.1.0b5&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;We need to specify the version of the SDK, because otherwise the current non-beta version would be installed, which lacks the Text Analytics for Health functionality.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The service can analyze a batch of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;text documents&lt;/STRONG&gt;, up to 10 at a time. You can pass either a list of documents or a dictionary. Provided we have the text of an abstract in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;txt&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;variable, we can use the following code to analyze it:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;poller = text_analytics_client.begin_analyze_healthcare_entities([txt])
res = list(poller.result())
print(res)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This results in the following object:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-txt"&gt;[AnalyzeHealthcareEntitiesResultItem(
  id=0, entities=[
     HealthcareEntity(text=2019, category=Time, subcategory=None, length=4, offset=20, confidence_score=0.85, data_sources=None, 
        related_entities={HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}): 'TimeOfCondition'}), 
     HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}), 
     HealthcareEntity(text=COVID-19, category=Diagnosis, subcategory=None, length=8, offset=55, confidence_score=1.0, 
        data_sources=[HealthcareEntityDataSource(entity_id=C5203670, name=UMLS), HealthcareEntityDataSource(entity_id=U07.1, name=ICD10CM), HealthcareEntityDataSource(entity_id=10084268, name=MDR), ...
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;As you can see, in addition to just the list of entities, we also get the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Entity Mapping&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of entities to standard medical ontologies, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.nlm.nih.gov/research/umls/index.html" target="_blank" rel="noopener"&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Relations&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;between entities inside the text, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;TimeOfCondition&lt;/CODE&gt;, etc.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Negation&lt;/STRONG&gt;, which indicates that an entity was used in a negative context, for example&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;COVID-19 diagnosis did not occur&lt;/EM&gt;.&lt;/LI&gt;
&lt;/UL&gt;
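&lt;P&gt;For example, assuming res holds the result list from the SDK call above, a small sketch that walks the entities and pulls out their UMLS mappings could look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# res is the list returned by poller.result() in the snippet above
for doc in res:
    for ent in doc.entities:
        # data_sources may be None; keep only the UMLS identifiers
        umls_ids = [ds.entity_id for ds in (ent.data_sources or []) if ds.name == "UMLS"]
        print(ent.text, ent.category, umls_ids)&lt;/LI-CODE&gt;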
&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_2-1618309813783.png" style="width: 565px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272401iCED22C0254BFADF5/image-dimensions/565x195?v=v2" width="565" height="195" role="button" title="shwars_2-1618309813783.png" alt="shwars_2-1618309813783.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to using Python SDK, you can also call Text Analytics using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Application Programming Interface"&gt;API&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directly. This is useful if you are using a programming language that does not have a corresponding SDK, or if you prefer to receive Text Analytics result in the JSON format for further storage or processing. In Python, this can be easily done using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;requests&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;library:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;uri = f"{endpoint}/text/analytics/v3.1-preview.3/entities/
         health/jobs?model-version=v3.1-preview.4"
headers = { "Ocp-Apim-Subscription-Key" : key }
resp = requests.post(uri,headers=headers,data=doc)
res = resp.json()
if res['status'] == 'succeeded':
    result = t['results']
else:
    result = None&lt;/LI-CODE&gt;
&lt;P&gt;&lt;EM&gt;(We need to make sure to use the preview endpoint to have access to text analytics for health)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Resulting JSON file will look like this:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{"id": "jk62qn0z",
 "entities": [
    {"offset": 24, "length": 28, "text": "coronavirus disease pandemic", 
     "category": "Diagnosis", "confidenceScore": 0.98, 
     "isNegated": false}, 
    {"offset": 54, "length": 8, "text": "COVID-19", 
     "category": "Diagnosis", "confidenceScore": 1.0, "isNegated": false, 
     "links": [
       {"dataSource": "UMLS", "id": "C5203670"}, 
       {"dataSource": "ICD10CM", "id": "U07.1"}, ... ]},
 "relations": [
    {"relationType": "Abbreviation", "bidirectional": true, 
     "source": "#/results/documents/2/entities/6", 
     "target": "#/results/documents/2/entities/7"}, ...],
}
&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;In production, you may want to incorporate some code that will retry the operation when an error is returned by the service. For more guidance on proper implementation of cognitive services&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;clients, you can&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/azure/ai/textanalytics" target="_blank" rel="noopener"&gt;check source code&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of Azure Python SDK, or use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://swagger.io/" target="_blank" rel="noopener"&gt;Swagger&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to generate client code.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="using-cosmosdb-to-store-analysis-result"&gt;Using Cosmos DB to Store Analysis Result&lt;/H2&gt;
&lt;P&gt;Using Python code similar to the one above, we can extract JSON entity/relation metadata for each paper abstract. This process takes quite some time for 400K papers, and to speed it up it can be parallelized using technologies such as &lt;A href="https://docs.microsoft.com/azure/batch/?WT.mc_id=aiml-20447-dmitryso" target="_self"&gt;Azure Batch&lt;/A&gt; or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/machine-learning/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;. However, in my first experiment I just ran the script on one VM in the cloud, and the data was ready in around 11 hours.&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_3-1618309813793.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272402i1F0BECF51E41517F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_3-1618309813793.png" alt="shwars_3-1618309813793.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Having done this, we have now obtained a collection of papers, each having a number of entities and corresponding relations. This structure is inherently hierarchical, and the best way to store and process it is to use a NoSQL approach to data storage. In Azure,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cosmos-db/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cosmos DB&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is a universal database that can store and query semi-structured data like our JSON collection, so it makes sense to upload all JSON files into a Cosmos DB collection. This can be done using the following code:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;coscli = azure.cosmos.CosmosClient(cosmos_uri, credential=cosmos_key)
cosdb = coscli.get_database_client("CORD")
cospapers = cosdb.get_container_client("Papers")
for x in all_papers_json:
    cospapers.upsert_item(x)&lt;/LI-CODE&gt;
&lt;P&gt;Here,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;all_papers_json&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is a variable (or generator function) containing individual JSON documents for each paper. We also assume that you have created a Cosmos DB database called ‘CORD’ with a container called ‘Papers’, and placed the required credentials into the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cosmos_uri&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cosmos_key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;variables.&lt;/P&gt;
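&lt;P&gt;If the database and container do not exist yet, the same SDK can create them. Here is a small sketch, reusing the &lt;CODE class="language-plaintext highlighter-rouge"&gt;coscli&lt;/CODE&gt; client from above (the partition key path is only an illustrative choice):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azure.cosmos import PartitionKey

# create the 'CORD' database and 'Papers' container if they are missing
cosdb = coscli.create_database_if_not_exists("CORD")
cospapers = cosdb.create_container_if_not_exists(
    "Papers", partition_key=PartitionKey(path="/id"))  # '/id' is an illustrative partition key&lt;/LI-CODE&gt;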
&lt;P&gt;After running this code, we will end up with the container&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;Papers&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;holding all the metadata. We can now work with this container in the Azure Portal by going to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Data Explorer&lt;/STRONG&gt;:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_4-1618309813810.png" style="width: 631px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272405i549D361D9A56057D/image-dimensions/631x284?v=v2" width="631" height="284" role="button" title="shwars_4-1618309813810.png" alt="shwars_4-1618309813810.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now we can use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cosmos-db/sql-query-getting-started/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cosmos DB SQL&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in order to query our collection. For example, here is how we can obtain the list of all medications found in the corpus:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- unique medication names
SELECT DISTINCT e.text 
FROM papers p 
JOIN e IN p.entities 
WHERE e.category='MedicationName'&lt;/LI-CODE&gt;
&lt;P&gt;Using SQL, we can formulate some very specific queries. Suppose a medical specialist wants to find all proposed dosages of a specific medication (say,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;hydroxychloroquine&lt;/STRONG&gt;), and see all papers that mention those dosages. This can be done using the following query:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- dosage of specific drug with paper titles
SELECT p.title, r.source.text
FROM papers p JOIN r IN p.relations 
WHERE r.relationType='DosageOfMedication' 
AND CONTAINS(r.target.text,'hydro')&lt;/LI-CODE&gt;
&lt;P&gt;You can execute this query interactively in Azure Portal, inside Cosmos DB Data Explorer. The result of the query looks like this:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;[
 {
  "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
  "text": "400 mg"
 },{
  "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
   "text": "maintenance dose"
    },...]&lt;/LI-CODE&gt;
&lt;P&gt;A more difficult task would be to select all entities together with their corresponding ontology ID. This would be extremely useful, because eventually we want to be able to refer to a specific entity (&lt;EM&gt;hydroxychloroquine&lt;/EM&gt;) regardless of the way it was mentioned in the paper (for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;HCQ&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;also refers to the same medication). We will use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;as our main ontology.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--- get entities with UMLS IDs
SELECT e.category, e.text, 
  ARRAY (SELECT VALUE l.id 
         FROM l IN e.links 
         WHERE l.dataSource='UMLS')[0] AS umls_id 
FROM papers p JOIN e IN p.entities&lt;/LI-CODE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="creating-interactive-dashboards"&gt;Creating Interactive Dashboards&lt;/H2&gt;
&lt;P&gt;While the ability to use a SQL query to obtain an answer to a specific question, like medication dosages, is a very useful tool, it is not convenient for non-IT professionals who do not have a high level of SQL mastery. To make the collection of metadata accessible to medical professionals, we can use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tool to create an interactive dashboard for entity/relation exploration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_5-1618309813826.png" style="width: 597px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272404iF8211DD19E119DA0/image-dimensions/597x533?v=v2" width="597" height="533" role="button" title="shwars_5-1618309813826.png" alt="shwars_5-1618309813826.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the example above, you can see a dashboard of different entities. One can select the desired entity type on the left (e.g.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Medication Name&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in our case), and observe all entities of this type on the right, together with their counts. You can also see the associated&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;IDs in the table, and from the example above one can notice that several entities can refer to the same ontology ID (&lt;EM&gt;hydroxychloroquine&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;HCQ&lt;/EM&gt;).&lt;/P&gt;
&lt;P&gt;To make this dashboard, we need to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/desktop/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI Desktop&lt;/A&gt;. First we need to import the Cosmos DB data - the tool supports direct import of data from Azure.&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_6-1618309813830.png" style="width: 551px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272403iCEB41BA5795F57A2/image-dimensions/551x617?v=v2" width="551" height="617" role="button" title="shwars_6-1618309813830.png" alt="shwars_6-1618309813830.png" /&gt;&lt;/span&gt;
&lt;P&gt;Then we provide the SQL query to get all entities with the corresponding&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;IDs - the one we have shown above - and one more query to display all unique categories, shown below. Then we drag those two tables onto the PowerBI canvas to get the dashboard shown above. The tool automatically understands that the two tables are linked by a field named&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;category&lt;/STRONG&gt;, and supports filtering the second table based on the selection in the first one.&lt;/P&gt;
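&lt;P&gt;The second query can be as simple as this (consistent with the entity schema used earlier):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- unique entity categories for the dashboard filter
SELECT DISTINCT e.category
FROM papers p JOIN e IN p.entities&lt;/LI-CODE&gt;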
&lt;P&gt;Similarly, we can create a tool to view relations:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_7-1618309813835.png" style="width: 567px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272406iB6EE14130450782C/image-dimensions/567x449?v=v2" width="567" height="449" role="button" title="shwars_7-1618309813835.png" alt="shwars_7-1618309813835.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From this tool, we can make queries similar to the one we made above in SQL, to determine dosages of a specific medication. To do this, we select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;DosageOfMedication&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;relation type in the left table, and then filter the right table by the medication we want. It is also possible to create further drill-down tables to display the specific papers that mention the selected dosages, making this tool a useful research instrument for medical scientists.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="getting-automatic-insights"&gt;Getting Automatic Insights&lt;/H2&gt;
&lt;P&gt;The most interesting part of the story, however, is to draw some automatic insights from the text, such as the change in medical treatment strategy over time. To do this, we need to write some more code in Python to do proper data analysis. The most convenient way to do that is to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Notebooks embedded into Cosmos DB&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_8-1618309813841.png" style="width: 622px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272408i3CBE604E97DB3587/image-dimensions/622x248?v=v2" width="622" height="248" role="button" title="shwars_8-1618309813841.png" alt="shwars_8-1618309813841.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Those notebooks support embedded SQL queries, so we can execute a SQL query and load the results into a Pandas DataFrame, the Python-native way to explore data:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;%%sql --database CORD --container Papers --output meds
SELECT e.text, e.isNegated, p.title, p.publish_time,
       ARRAY (SELECT VALUE l.id FROM l 
              IN e.links 
              WHERE l.dataSource='UMLS')[0] AS umls_id 
FROM papers p 
JOIN e IN p.entities
WHERE e.category = 'MedicationName'&lt;/LI-CODE&gt;
&lt;DIV class="language-sql highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;Here we end up with&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;meds&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;DataFrame, containing names of medicines, together with corresponding paper titles and publishing date. We can further group by ontology ID to get frequencies of mentions for different medications:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;unimeds = meds.groupby('umls_id') \
              .agg({'text' : lambda x : ','.join(x), 
                    'title' : 'count', 
                    'isNegated' : 'sum'})
unimeds['negativity'] = unimeds['isNegated'] / unimeds['title']
unimeds['name'] = unimeds['text'] \
                  .apply(lambda x: x if ',' not in x 
                                     else x[:x.find(',')])
unimeds.sort_values('title',ascending=False).drop('text',axis=1)&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;This gives us the following table:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;umls_id&lt;/TH&gt;
&lt;TH&gt;title&lt;/TH&gt;
&lt;TH&gt;isNegated&lt;/TH&gt;
&lt;TH&gt;negativity&lt;/TH&gt;
&lt;TH&gt;name&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;C0020336&lt;/TD&gt;
&lt;TD&gt;4846&lt;/TD&gt;
&lt;TD&gt;191&lt;/TD&gt;
&lt;TD&gt;0.039414&lt;/TD&gt;
&lt;TD&gt;hydroxychloroquine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0008269&lt;/TD&gt;
&lt;TD&gt;1870&lt;/TD&gt;
&lt;TD&gt;38&lt;/TD&gt;
&lt;TD&gt;0.020321&lt;/TD&gt;
&lt;TD&gt;chloroquine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C1609165&lt;/TD&gt;
&lt;TD&gt;1793&lt;/TD&gt;
&lt;TD&gt;94&lt;/TD&gt;
&lt;TD&gt;0.052426&lt;/TD&gt;
&lt;TD&gt;Tocilizumab&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C4726677&lt;/TD&gt;
&lt;TD&gt;1625&lt;/TD&gt;
&lt;TD&gt;24&lt;/TD&gt;
&lt;TD&gt;0.014769&lt;/TD&gt;
&lt;TD&gt;remdesivir&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0052796&lt;/TD&gt;
&lt;TD&gt;1201&lt;/TD&gt;
&lt;TD&gt;84&lt;/TD&gt;
&lt;TD&gt;0.069942&lt;/TD&gt;
&lt;TD&gt;azithromycin&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0067874&lt;/TD&gt;
&lt;TD&gt;1&lt;/TD&gt;
&lt;TD&gt;0&lt;/TD&gt;
&lt;TD&gt;0.000000&lt;/TD&gt;
&lt;TD&gt;1-butanethiol&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From this table, we can select the top-15 most frequently mentioned medications:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;top = { 
    x[0] : x[1]['name'] for i,x in zip(range(15),
      unimeds.sort_values('title',ascending=False).iterrows())
}&lt;/LI-CODE&gt;
&lt;P&gt;To see how the frequency of mentions for medications changed over time, we can count the number of mentions in each month:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# First, get table with only top medications 
imeds = meds[meds['umls_id'].apply(lambda x: x in top.keys())].copy()
imeds['name'] = imeds['umls_id'].apply(lambda x: top[x])

# Create a computable field with month
imeds['month'] = imeds['publish_time'].astype('datetime64[M]')

# Group by month
medhist = imeds.groupby(['month','name']) \
          .agg({'text' : 'count', 
                'isNegated' : [positive_count,negative_count] })&lt;/LI-CODE&gt;
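&lt;P&gt;The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;positive_count&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;negative_count&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;aggregators are small custom helpers that are not shown in this excerpt. A minimal sketch, assuming&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;isNegated&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is stored as a 0/1 flag, could look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch of the aggregation helpers used above (assumed implementation)
def negative_count(series):
    # number of negated mentions in the group
    return series.sum()

def positive_count(series):
    # number of non-negated mentions in the group
    return len(series) - series.sum()&lt;/LI-CODE&gt;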
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&lt;SPAN style="font-family: inherit;"&gt;This gives us the DataFrame that contains number of positive and negative mentions of medications for each month. From there, we can plot corresponding graphs using&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;matplotlib&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;medh = medhist.reset_index()
fig,ax = plt.subplots(5,3)
for i,n in enumerate(top.keys()):
    medh[medh['name']==top[n]] \
    .set_index('month')['isNegated'] \
    .plot(title=top[n],ax=ax[i//3,i%3])
fig.tight_layout()&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_9-1618309813852.png" style="width: 636px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272407iE9F521F29AE64C09/image-dimensions/636x259?v=v2" width="636" height="259" role="button" title="shwars_9-1618309813852.png" alt="shwars_9-1618309813852.png" /&gt;&lt;/span&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="visualizing-terms-co-occurrence"&gt;Visualizing Terms Co-Occurrence&lt;/H2&gt;
&lt;P&gt;Another interesting insight comes from observing which terms frequently occur together. To visualize such dependencies, we can use two types of diagrams:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sankey diagram&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;allows us to investigate relations between two types of terms, e.g. diagnosis and treatment&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Chord diagram&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;helps to visualize co-occurrence of terms of the same type (e.g. which medications are mentioned together)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To plot both diagrams, we need to compute a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;co-occurrence matrix&lt;/STRONG&gt;, in which the entry in row&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;i&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and column&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;j&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;contains the number of co-occurrences of terms&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;i&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;j&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the same abstract (when both dimensions use the same set of terms, this matrix is symmetric). To compute it, we manually select a relatively small number of terms for our ontology, grouping some terms together if needed:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;treatment_ontology = {
 'C0042196': ('vaccination',1),
 'C0199176': ('prevention',2),
 'C0042210': ('vaccines',1), ... }

diagnosis_ontology = {
 'C5203670': ('COVID-19',0),
 'C3714514': ('infection',1),
 'C0011065': ('death',2),
 'C0042769': ('viral infections',1),
 'C1175175': ('SARS',3),
 'C0009450': ('infectious disease',1), ...}&lt;/LI-CODE&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&lt;SPAN style="font-family: inherit;"&gt;Then we define a function to compute co-occurrence matrix for two categories specified by those ontology dictionaries:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;def get_matrix(cat1, cat2):
    d1 = {i:j[1] for i,j in cat1.items()}
    d2 = {i:j[1] for i,j in cat2.items()}
    s1 = set(cat1.keys())
    s2 = set(cat2.keys())
    a = np.zeros((len(cat1),len(cat2)))
    for i in all_papers:
        ent = get_entities(i)
        for j in ent &amp;amp; s1:
            for k in ent &amp;amp; s2 :
                a[d1[j],d2[k]] += 1
    return a&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;Here&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;get_entities&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;function returns the list of&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR style="font-family: inherit;" title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;IDs for all entities mentioned in the paper, and&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;all_papers&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;is the generator that returns the complete list of paper abstracts metadata.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
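&lt;P&gt;Those two helpers are not shown in this excerpt. A minimal sketch, assuming each paper document has the same shape as in the SQL query earlier (an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;entities&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;array whose items carry UMLS links) and that the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;azure-cosmos&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;SDK is used to read the container (the account name and key below are placeholders), could look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azure.cosmos import CosmosClient

# Placeholder connection details - replace with your own Cosmos DB account and key
client = CosmosClient(url='https://YOUR-ACCOUNT.documents.azure.com:443/',
                      credential='YOUR-KEY')
container = client.get_database_client('CORD').get_container_client('Papers')

# Materialize all paper metadata documents so they can be iterated repeatedly
all_papers = list(container.query_items(
    query='SELECT * FROM papers p',
    enable_cross_partition_query=True))

def get_entities(paper):
    # Return the set of UMLS IDs for all entities mentioned in one paper document
    ids = set()
    for e in paper.get('entities', []):
        for link in e.get('links', []):
            if link.get('dataSource') == 'UMLS':
                ids.add(link['id'])
    return ids&lt;/LI-CODE&gt;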
&lt;P&gt;To actually plot the Sankey diagram, we can use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://plotly.com/python/" target="_blank" rel="noopener"&gt;Plotly&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;graphics library. This process is well described&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://plotly.com/python/sankey-diagram/" target="_blank" rel="noopener"&gt;here&lt;/A&gt;, so I will not go into further details. Here are the results:&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_10-1618309813867.png" style="width: 657px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272411i9CA3DE07AC0D98A4/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_10-1618309813867.png" alt="shwars_10-1618309813867.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_11-1618309813875.png" style="width: 657px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272410i9BA843A2C8DAEE3A/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_11-1618309813875.png" alt="shwars_11-1618309813875.png" /&gt;&lt;/span&gt;
&lt;P&gt;A chord diagram cannot easily be plotted with Plotly, but it can be done with a different library -&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://pypi.org/project/chord/" target="_blank" rel="noopener"&gt;Chord&lt;/A&gt;. The main idea remains the same - we build the co-occurrence matrix using the same function described above, passing the same ontology twice, and then pass this matrix to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;Chord&lt;/CODE&gt;:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def chord(cat):
    matrix = get_matrix(cat,cat)
    np.fill_diagonal(matrix,0)
    names = cat.keys()
    Chord(matrix.tolist(), names, font_size = "11px").to_html()&lt;/LI-CODE&gt;
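&lt;P&gt;For example, assuming a second dictionary for medications is defined in the same way as the ontologies above (the name&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;medication_ontology&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is illustrative), the two diagrams shown in the table below could be produced with calls like:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative usage; medication_ontology is an assumed dictionary
# built the same way as treatment_ontology and diagnosis_ontology above
chord(treatment_ontology)    # co-occurrence of treatment types
chord(medication_ontology)   # co-occurrence of medications&lt;/LI-CODE&gt;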
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;The results of chord diagrams for treatment types and medications are below:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_12-1618309813883.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272409iF91465DB62F52534/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_12-1618309813883.png" alt="shwars_12-1618309813883.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_13-1618309813895.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272412iF38991534E116039/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_13-1618309813895.png" alt="shwars_13-1618309813895.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Treatment types&lt;/TD&gt;
&lt;TD&gt;Medications&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The diagram on the right shows which medications are mentioned together (in the same abstract). Well-known combinations, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;hydroxychloroquine + azithromycin&lt;/STRONG&gt;, are clearly visible.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="conclusion"&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;In this post, we have described the architecture of a proof-of-concept system for knowledge extraction from large corpora of medical texts. We use Text Analytics for Health to perform the main task of extracting entities and relations from text, and then combine a number of Azure services to build a query tool for medical scientists and to extract some visual insights. This post is quite conceptual at the moment, and the system can be further improved by providing more detailed drill-down functionality in the PowerBI module, as well as by doing more data exploration on the extracted entity/relation collection. It would also be interesting to switch to processing full-text articles, in which case we would need to think about slightly different criteria for co-occurrence of terms (e.g. in the same paragraph vs. the same paper).&lt;/P&gt;
&lt;P&gt;The same approach can be applied in other scientific areas, but we would need to be prepared to train a custom neural network model to perform entity extraction. This task has been briefly outlined above (when we talked about the use of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model"&gt;BERT&lt;/ABBR&gt;), and I will try to focus on it in one of my next posts. Meanwhile, feel free to reach out to me if you are doing similar research, or have any specific questions on the code and/or methodology.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 13 Apr 2021 19:42:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890</guid>
      <dc:creator>shwars</dc:creator>
      <dc:date>2021-04-13T19:42:11Z</dc:date>
    </item>
    <item>
      <title>Learn about Bot Framework Composer’s new authoring experience and deploy your bot to a telephone</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Customer expectations continue to increase,&amp;nbsp;looking for&amp;nbsp;immediate response and rapid issue resolution, across multiple&amp;nbsp;channels&amp;nbsp;24/7.&amp;nbsp;Nowhere is this more apparent than the contact center, with this&amp;nbsp;landscape&amp;nbsp;is&amp;nbsp;driving the need for&amp;nbsp;efficiencies, such as reducing&amp;nbsp;call&amp;nbsp;handling times&amp;nbsp;and increasing call deflection rates&amp;nbsp;– all whilst aiming to deliver a&amp;nbsp;personalized and tailored&amp;nbsp;customer experience.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To help&amp;nbsp;respond to this need,&amp;nbsp;we announced&amp;nbsp;the public preview of the telephony channel for Azure Bot Service&amp;nbsp;in February 2021,&amp;nbsp;expanding&amp;nbsp;the already significant number of touch points&amp;nbsp;offered by the service, to include&amp;nbsp;this&amp;nbsp;increasingly&amp;nbsp;critical method of communication.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Built on&amp;nbsp;state-of-the-art speech&amp;nbsp;services&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The&amp;nbsp;new telephony channel, combined with our&amp;nbsp;Bot Framework&amp;nbsp;developer&amp;nbsp;platform,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;makes it easy to&amp;nbsp;rapidly&amp;nbsp;build &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;always-available &lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;virtual&amp;nbsp;assistants, or IVR assistants,&amp;nbsp;that provide&amp;nbsp;natural language&amp;nbsp;intent-based call handling&amp;nbsp;and the ability to&amp;nbsp;handle advanced conversation&amp;nbsp;flows, such as context switching&amp;nbsp;and&amp;nbsp;responding to&amp;nbsp;follow up questions&amp;nbsp;and still meeting the&amp;nbsp;goal of&amp;nbsp;reducing operational costs for enterprises.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This new capability&amp;nbsp;combines several of our&amp;nbsp;Azure&amp;nbsp;and AI services, including&amp;nbsp;our &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;state-of-the-art &lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;Cognitive Speech Service,&amp;nbsp;enabling fluid, natural-sounding speech that matches the patterns and intonation of human voices&amp;nbsp;through&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Text-to-Speech neural voices&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;with&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/communication-services/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Communications Services&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;powering&amp;nbsp;various&amp;nbsp;calling&amp;nbsp;capabilities.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The channel also&amp;nbsp;provides&amp;nbsp;support&amp;nbsp;for&amp;nbsp;full duplex conversations&amp;nbsp;and&amp;nbsp;streaming audio over PSTN, support for DTMF,&amp;nbsp;barge-in&amp;nbsp;(allowing a caller to interrupt the virtual&amp;nbsp;assistant)&amp;nbsp;and more.&amp;nbsp;Follow our roadmap and try out one of our samples on the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/botframework-telephony" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Telephony channel GitHub repository&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Improving our Conversational AI SDK and tools for&amp;nbsp;speech experiences&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To&amp;nbsp;compliment the introduction of the telephony channel and ensure our customers can create industry leading experiences, we have&amp;nbsp;added new features to Bot Framework Composer,&amp;nbsp;an&amp;nbsp;open-source&amp;nbsp;conversational&amp;nbsp;authoring&amp;nbsp;tool, featuring a visual canvas,&amp;nbsp;built on top of the Bot Framework SDK,&amp;nbsp;allowing you&amp;nbsp;to extend and customize the conversation with code and pre-built components.&amp;nbsp; Updates to Composer to support speech experiences include,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;The ability to add tailored speech responses&amp;nbsp;in seconds, either for a voice only or multi-modal (text and speech)&amp;nbsp;agent.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Addition of global application settings for your bot, allowing you to set a consistent voice font to be used on speech enabled channels, including taking care of setting the required base SSML tags.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Authoring UI&amp;nbsp;helpers that allow you to&amp;nbsp;add additional&amp;nbsp;common SSML (&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Speech&amp;nbsp;Synthesis&amp;nbsp;Markup Language&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;)&amp;nbsp;tags to control the intonation, speed and even the style of the voice used,&amp;nbsp;including new styles available for our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;neural voice fonts&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;, such as&amp;nbsp;a dedicated Customer Service style.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Comprehensive Contact Center solution through Dynamics 365&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft announced&amp;nbsp;the expansion of Microsoft Dynamics 365 Customer Service omnichannel capabilities to include a new voice channel,&amp;nbsp;that is built on this telephony channel&amp;nbsp;infrastructure.&amp;nbsp;With&amp;nbsp;native&amp;nbsp;voice, businesses receive seamless, end-to-end&amp;nbsp;experiences within a single solution, ensuring consistent, personalized, and connected support across all channels of engagement.&amp;nbsp;This&amp;nbsp;new voice channel for Customer Service enables an all-in-one customer service solution without fragmentation or manual data integration&amp;nbsp;required, and&amp;nbsp;enables a faster time to value.&amp;nbsp;Learn&amp;nbsp;more&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://cloudblogs.microsoft.com/dynamics365/bdm/2020/09/23/new-voice-channel-streamlines-omnichannel-customer-experiences/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;here&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Get started building for telephony!&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Sign up for&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/free/cognitive-services/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Azure trial&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Join&amp;nbsp;us on &lt;A href="https://www.youtube.com/watch?v=kdA6zAnCXzM" target="_self"&gt;live stream of AI Show&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;on 4/16 11AM&amp;nbsp;PDT&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Sign up for&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/bd-p/AzureAIAMA" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;conversational AI Ask Microsoft Anything (4/28)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;To&amp;nbsp;get started&amp;nbsp;developing a virtual agent, that you can surface via the new telephony channel today, download&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/trycomposer" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Bot Framework Composer&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;To read more about the telephony channel preview, including documentation and samples, visit the Bot Framework telephony channel &lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/botframework-telephony" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;GitHub repository&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 14 Apr 2021 16:41:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739</guid>
      <dc:creator>KelvinChen</dc:creator>
      <dc:date>2021-04-14T16:41:33Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing Multivariate Anomaly Detection</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/bc-p/2269128#M205</link>
      <description>&lt;P&gt;Fantastic!&lt;/P&gt;</description>
      <pubDate>Tue, 13 Apr 2021 00:59:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/bc-p/2269128#M205</guid>
      <dc:creator>Alex Thomas</dc:creator>
      <dc:date>2021-04-13T00:59:44Z</dc:date>
    </item>
    <item>
      <title>Introducing Multivariate Anomaly Detection</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679</link>
      <description>&lt;P&gt;Microsoft partners and customers have been building metrics monitoring solutions for AIOps and predictive maintenance, by leveraging the easy-to-use time-series anomaly detection Cognitive Service: Anomaly Detector. Because of its ability to analyze time-series individually, Anomaly Detector is benefiting the industry with its simplicity and scalability.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;What's new&lt;/H2&gt;
&lt;P&gt;We are pleased to announce the new multi-variate capability of Anomaly Detector. The new multivariate anomaly detection APIs in Anomaly Detector further enable developers to easily integrate advanced AI of detecting anomalies from groups of metrics into their applications without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals are now counted as key factors. The new feature protects your mission-critical systems and physical assets, such as software applications, servers, factory machines, spacecraft, or even your business, from failures with a holistic view.&lt;/P&gt;
&lt;P&gt;Imagine 20 sensors from an auto engine generating 20 different signals, e.g., vibration, temperature, etc. The readings of those signals individually may not tell you much on system-level issues, but together, could represent the health of the engine. When the synergy of those signals turns odd, the multivariate anomaly detection feature can sense the anomaly like a seasoned floor expert. Moreover, the AI models are trained and customized for your data such that it understands your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time-series anomaly detection capabilities as well as the interpretability of the anomalies into predictive maintenance solutions, or AIOps monitoring solutions for complex enterprise software, or business intelligence tools.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Customer love&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Siemens.png" style="width: 197px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270943iF799F4859624796C/image-size/small?v=v2&amp;amp;px=200" role="button" title="Siemens.png" alt="Siemens.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;“Medical device production demands unprecedented precision. For this reason, the Siemens Healthineers team uses Multivariate Anomaly Detector (MVAD) in medical device stress tests during the final inspection in the production. We found MVAD easy to use and work almost out of the box with promising performance. With the ready-to-use model, we don't need to develop a custom AD model, which ensures a short time to market. We plan to expand this technology also to other use cases. It is made easy due to good integration into our ML platform and processes.” - Dr. Jens Fürst, Head Digitalization and Automation at Siemens Healthineers&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Airbus.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270947iB01CC75155D882ED/image-size/small?v=v2&amp;amp;px=200" role="button" title="Airbus.jpg" alt="Airbus.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;To better understand the health and condition of the aircraft and foresee and fix potential problems before they occur, Airbus deployed Anomaly Detector, part of Cognitive Services, to gather and analyze the telemetry data. It began as a proof of concept of the aircraft-monitoring application by loading telemetry data from multiple flights for analysis and model training. “Early tests have shown that for many cases, the out-of-the-box solution works beautifully, which helps us deploy our solutions faster. I would say that we save up to three months on development for our smaller use cases with Anomaly Detector.” &lt;BR /&gt;Marcel Rummens: Product Owner of Internal AI Platform, Airbus&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;AI horsepower&lt;/H2&gt;
&lt;P&gt;Time-series anomaly detection is an important research topic in data mining and has a wide range of applications in the industry. Efficient and accurate anomaly detection helps companies to monitor their key metrics continuously and alert for potential incidents on time. In many real-world applications like predictive maintenance and SpaceOps, multiple time-series metrics are collected to reflect the health status of a system. Univariate time-series anomaly detection algorithms can find anomalies for a single metric. However, it could be problematic in deciding whether the whole system is running normally. For example, sudden changes of a certain metric do not necessarily mean failures of the system. As shown in Figure 1, there are obvious boosts in the volume of TIMESERIES RECEIVED and DATA RECEIVED ON FLINK in the green segment, but the system is still in a healthy state as these two features share a consistent tendency. However, in the red segment, GC shows an inconsistent pattern with other metrics, indicating a problem in garbage collection. Consequently, it is essential to take the correlations between different time series into consideration in a multivariate time-series anomaly detection system.&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="figure1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270976iC877B155C1E6BCA6/image-size/large?v=v2&amp;amp;px=999" role="button" title="figure1.png" alt="Fig.1" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Fig.1&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this newly introduced feature, we productized a novel framework — MTAD-GAT (Multivariate Time-series Anomaly Detection via Graph Attention Network), to tackle the limitations of previous solutions. Our method considers each univariate time-series as an individual feature and tries to model the correlations between different features explicitly, while the temporal dependencies within each time-series are modeled at the same time. The key ingredients in our model are two graph attention layers, namely the feature-oriented graph attention layer and the time-oriented graph attention layer. The feature-oriented graph attention layer captures the causal relationships between multiple features, and the time-oriented graph attention layer underlines the dependencies along the temporal dimension. In addition, we jointly train a forecasting-based model and a reconstruction-based model for better representations of time-series data. The two models can be optimized simultaneously by a joint objective function.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="maga.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270978iFB7524395292661F/image-size/large?v=v2&amp;amp;px=999" role="button" title="maga.png" alt="maga.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;The magic behind the scenes can be summarized as follows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A novel framework to solve the multivariate time-series anomaly detection problem in a self-supervised manner. Our model shows superior performances on two public datasets and establishes state-of-the-art scores in the literature.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;For the first time, we leverage two parallel graph attention (GAT) layers to learn the relationships between different time-series and timestamps dynamically. Especially, our model captures the correlations between different time-series successfully without any prior knowledge.&lt;/LI&gt;
&lt;LI&gt;We integrate the advantages of both forecasting-based and reconstruction-based models by introducing a joint optimization target. The forecasting-based model focuses on single-timestamp prediction, while the reconstruction-based model learns a latent representation of the entire time-series.&lt;/LI&gt;
&lt;LI&gt;Our network has good interpretability. We analyze the attention scores of multiple time-series learned by the graph attention layers, and the results correspond reasonably well to human intuition. We also show its capability of anomaly diagnosis.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Multivariate anomaly detection API overview&lt;/H2&gt;
&lt;P&gt;This new feature has a different workflow compared with the existing univariate feature. There are two phases to obtain detection results: the training phase and the inference phase. In the training phase, you need to provide some historical data to let the model learn past patterns. Then, in the inference phase, you can call the inference API to acquire detection results for the multivariate time-series in a given range.&lt;/P&gt;
&lt;TABLE width="691"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;&lt;STRONG&gt;APIs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;&lt;STRONG&gt;Functionality&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Create and train model using training data&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Get model info including training status and parameters used in the model&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models[?$skip][&amp;amp;$top]&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;List models of a subscription&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}/detect&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Submit an inference task with the user's data (asynchronous)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/results/{resultid}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Get anomalies + root causes (the contribution scores of each variate for each incident)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Delete an existing multivariate model according to the modelId&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}/export&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Export Multivariate Anomaly Detection Model as Zip file&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
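&lt;P&gt;To make the train-then-detect workflow concrete, here is a minimal sketch in Python using the REST endpoints above. The endpoint path and API version, key, blob source URLs, and request/response field names are illustrative placeholders based on the preview API shape; consult the documentation linked below for the exact schema.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import time
import requests

# Placeholder values - replace with your own Anomaly Detector resource endpoint and key
endpoint = 'https://YOUR-RESOURCE.cognitiveservices.azure.com/anomalydetector/v1.1-preview'
headers = {'Ocp-Apim-Subscription-Key': 'YOUR-KEY', 'Content-Type': 'application/json'}

# 1. Train a model on historical multivariate data (asynchronous operation)
train_body = {
    'source': 'https://YOUR-STORAGE.blob.core.windows.net/data/training.zip',  # illustrative
    'startTime': '2021-01-01T00:00:00Z',
    'endTime': '2021-01-31T00:00:00Z',
    'slidingWindow': 200
}
resp = requests.post(f'{endpoint}/multivariate/models', json=train_body, headers=headers)
model_id = resp.headers['Location'].split('/')[-1]

# 2. Poll the model status until training finishes (no timeout handling in this sketch)
while True:
    info = requests.get(f'{endpoint}/multivariate/models/{model_id}', headers=headers).json()
    if info['modelInfo']['status'] in ('READY', 'FAILED'):
        break
    time.sleep(10)

# 3. Submit an inference task and fetch anomalies plus contribution scores
detect_body = {
    'source': 'https://YOUR-STORAGE.blob.core.windows.net/data/inference.zip',  # illustrative
    'startTime': '2021-02-01T00:00:00Z',
    'endTime': '2021-02-02T00:00:00Z'
}
resp = requests.post(f'{endpoint}/multivariate/models/{model_id}/detect',
                     json=detect_body, headers=headers)
result_id = resp.headers['Location'].split('/')[-1]
result = requests.get(f'{endpoint}/multivariate/results/{result_id}', headers=headers).json()&lt;/LI-CODE&gt;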
&lt;H2&gt;Get started!&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/" target="_blank" rel="noopener"&gt;Learning more from our documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;QuickStarts: &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158805" target="_blank" rel="noopener"&gt;C#,&lt;/A&gt; &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158900" target="_blank" rel="noopener"&gt;Python&lt;/A&gt;, &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158901" target="_blank" rel="noopener"&gt;JavaScript&lt;/A&gt;, &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158901" target="_blank" rel="noopener"&gt;Java&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;Artificial Intelligence for developers&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Apr 2021 15:11:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679</guid>
      <dc:creator>Tony_Xing</dc:creator>
      <dc:date>2021-04-12T15:11:56Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2263209#M204</link>
      <description>&lt;P&gt;Question wrt performance. My chatbots using QnA Maker all suffer from the same issue. When the bots are not used for a while (a couple of hours), the QnA Maker service seems to idle. The first answer from the service takes at least 10 seconds; all calls after that take less than a second. Is there some kind of Always On for the QnA service?&lt;/P&gt;</description>
      <pubDate>Fri, 09 Apr 2021 06:57:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2263209#M204</guid>
      <dc:creator>HesselW</dc:creator>
      <dc:date>2021-04-09T06:57:57Z</dc:date>
    </item>
    <item>
      <title>Supercharge Azure ML code development with new VS Code integration</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Machine Learning (Azure ML) team is excited to announce the release of an enhanced developer experience for ‘compute instance’ and ‘notebooks’ users, through a VS Code integration in the Azure ML Studio! It is now easier than ever to work directly on your Azure ML compute instances from within Visual Studio Code, with full access to a remote terminal, your favorite VS Code extensions, the Git source control UI, and a debugger.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="vscode-small.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270923i4A4EF83FBEBCE7EB/image-size/large?v=v2&amp;amp;px=999" role="button" title="vscode-small.gif" alt="vscode-small.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bringing VS Code to Azure Machine Learning&lt;/H2&gt;
&lt;P&gt;The Azure Machine Learning and VS Code teams have been working in collaboration over the past couple of months to better understand user workflows for authoring, editing, and managing code files. The demand for VS Code became clear after speaking to a wide variety of users tasked with managing larger projects and operationalizing their models. Users were eager to continue working on their Azure ML compute resources and retain the development context initially defined through the Studio UI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first step to enabling a better editing experience for users was to evaluate what was currently used in VS Code. Users were familiar with extensions such as &lt;A href="https://code.visualstudio.com/docs/remote/ssh" target="_blank" rel="noopener"&gt;Remote-SSH&lt;/A&gt; and Jupyter, the former used to connect to their remote compute and the latter to author notebook files. The advantage of using Jupyter, JupyterLab, or &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906" target="_blank" rel="noopener"&gt;Azure ML notebooks&lt;/A&gt; was that they could be used for all compute instance types without requiring any additional configuration or networking changes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To enable users to work against their compute instances without requiring SSH or additional networking changes, the Azure ML and VS Code teams built a &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630" target="_blank" rel="noopener"&gt;Notebook-specific compute instance connect experience&lt;/A&gt;. The Azure ML extension was responsible for facilitating the connection between VS Code – Jupyter and the compute instance, taking care of authenticating on the user’s behalf. A month or so after releasing this capability, it was clear that users were excited about connectivity without SSH and being able to work directly within VS Code. However, working in the editor implied expectations around being able to use other VS Code features such as the remote terminal, debugger, and language server. Users expressed their frustration with being limited to working in a single Notebook file, being unable to view files on the remote server, and not being able to use their preferred extensions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;VS Code Integration: Features&lt;/H2&gt;
&lt;P&gt;Learning from prior releases and talking to users led the Azure ML and VS Code teams to build a &lt;STRONG&gt;complete VS Code experience&lt;/STRONG&gt; for compute instances&amp;nbsp;&lt;STRONG&gt;without using SSH&lt;/STRONG&gt;. Getting started with this experience is trivial – entry points have been integrated within the &lt;A href="http://ml.azure.com" target="_blank" rel="noopener"&gt;Azure ML Studio&lt;/A&gt; in both the Compute Instance and Notebooks tabs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="compute-entry-point.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270905i713BCB471336A361/image-size/large?v=v2&amp;amp;px=999" role="button" title="compute-entry-point.png" alt="Studio UI Compute Entry Point" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Studio UI Compute Entry Point&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="notebooks-entry-point.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270907i6BE6C0DEA9E84831/image-size/large?v=v2&amp;amp;px=999" role="button" title="notebooks-entry-point.png" alt="Studio UI Notebooks Entry Point" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Studio UI Notebooks Entry Point&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Through this VS Code integration customers will now have access to the following features and benefits:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Full integration with &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-files" target="_self"&gt;Azure ML file share and notebooks&lt;/A&gt;:&lt;/STRONG&gt; All file operations in VS Code are fully synced with the Azure ML Studio. For example, if a user drags and drops files from their local machine into VS Code connected to Azure ML, all files will be synced and appear in the Azure ML Studio.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/Docs/editor/versioncontrol#_git-support" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Git UI Experiences&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Fully manage Git repos in Azure ML with the rich VS Code source control UI.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/python/jupyter-support" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Notebook Editor&lt;/STRONG&gt;&lt;/A&gt;: Seamlessly click out from the Azure ML notebooks and continue to work on notebooks in the new native VS code editor.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/python/debugging" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Debugging&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Use the native debugging in VS Code to debug any training script before submitting it to an Azure ML cluster for batch training.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/editor/integrated-terminal" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;VS Code Terminal&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Work in the VS Code terminal that is fully connected to the compute instance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://code.visualstudio.com/docs/editor/extension-gallery" target="_self"&gt;VS Code Extension Support&lt;/A&gt;:&lt;/STRONG&gt; All VS Code extensions are fully supported in VS Code connected to the compute instance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="font-family: inherit;"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security" target="_self"&gt;Enterprise Support&lt;/A&gt;:&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; Work with VS Code securely in private endpoints without additional, complicated SSH and networking configuration. AAD credentials and RBAC are used to establish a secure connection to VNET/private link enabled Azure ML workspaces.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;VS Code Integration: How it Works&lt;/H2&gt;
&lt;P&gt;Clicking out to VS Code will launch a desktop VS Code session which initiates a secondary remote connection to the target compute. Within the remote connection window, the Azure ML extension creates a WebSocket connection between your local VS Code client and the remote compute instance.&lt;/P&gt;
&lt;P&gt;The connected window now provides you with:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Access to the mounted file share, with consistent syncing between what is seen in Jupyter* and the Azure ML Notebooks experience.&lt;/LI&gt;
&lt;LI&gt;Access to the machine’s local SSD in case you would like to clone and manage repos outside of the shared file share.&lt;/LI&gt;
&lt;LI&gt;The ability to manage repositories through the source control UI.&lt;/LI&gt;
&lt;LI&gt;The ability to create, interact with, and debug running applications.&lt;/LI&gt;
&lt;LI&gt;A remote terminal for executing commands directly against the remote compute.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Below is a high-level overview of the remote connection.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="remote-connect-hl-arch.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270909iF0AB1D7C0143EE92/image-size/large?v=v2&amp;amp;px=999" role="button" title="remote-connect-hl-arch.png" alt="Remote Connection Architecture Diagram (High-Level)" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Remote Connection Architecture Diagram (High-Level)&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This new connect capability and direct integration in the Azure ML Studio create a better-together experience between Azure ML and VS Code! When working on your machine learning projects, you can get started with a notebook in the Azure ML Studio for early data prep and exploratory work. When you’re ready to flesh out the rest of your project, work on multiple file types, and use more advanced editing capabilities and VS Code extensions, you can seamlessly transition over to working in VS Code. The retained context and file share usage enable you to move bi-directionally (from notebooks to VS Code and vice versa) without requiring additional work.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;You can initiate the connection to VS Code directly from the Studio UI through either the Compute Instance or Notebook pages. Alternatively, there are routes starting directly within VS Code if you would prefer. Given you have the &lt;A href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;Azure Machine Learning extension&lt;/A&gt; installed, you can find the compute instance in the tree view and right-click on it to connect. You can also invoke the command “Azure ML: Connect to compute instance” and follow the prompts to initiate the connection.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-command.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270910iB2F852D3AA9A8056/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci-command.png" alt="Azure ML extension command" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure ML extension command&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-context-menu.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270911i0915A8CE80FADF31/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci-context-menu.png" alt="Azure ML extension tree view context menu option" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure ML extension tree view context menu option&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more details on how you can get started with this experience, please take a look at our &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-vs-code-remote?tabs=extension" target="_blank" rel="noopener"&gt;public documentation&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Both the Azure ML and VS Code extension teams are always looking for feedback on our current experiences and what we should work on next. If there is anything you would like us to prioritize, please feel free to suggest so via our &lt;A href="https://github.com/microsoft/vscode-tools-for-ai/issues" target="_blank" rel="noopener"&gt;GitHub repo&lt;/A&gt;; if you would like to provide more general feedback, please &lt;A href="https://aka.ms/aml-ext-survey" target="_blank" rel="noopener"&gt;fill out our survey&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Apr 2021 15:25:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129</guid>
      <dc:creator>Sid_Unnithan</dc:creator>
      <dc:date>2021-04-08T15:25:26Z</dc:date>
    </item>
    <item>
      <title>Eleven more languages are generally available for Azure Neural Text-to-Speech</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored with Lihui Wang, Gang Wang, Xinfeng Chen, Qinying Liao, Garfield He and Sheng Zhao&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt; (Neural TTS), part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. Today we are happy to announce that 6 new languages were added to the Neural TTS portfolio with 12 voices available, and the 10 voices in preview with 5 languages are now generally available.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Six new languages&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;12 voices from 6 brand-new languages, with one male and one female voice in each language, are available now: Nia in ‘cy-GB’ Welsh (United Kingdom), Aled in ‘cy-GB’ Welsh (United Kingdom), Rosa in ‘en-PH’ English (Philippines), James in ‘en-PH’ English (Philippines), Charline in ‘fr-BE’ French (Belgium), Gerard in ‘fr-BE’ French (Belgium), Dena in ‘nl-BE’ Dutch (Belgium), Arnaud in ‘nl-BE’ Dutch (Belgium), Polina in ‘uk-UA’ Ukrainian (Ukraine), Ostap in ‘uk-UA’ Ukrainian (Ukraine), Uzma in ‘ur-PK’ Urdu (Pakistan), and Asad in ‘ur-PK’ Urdu (Pakistan).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the samples below or try them with your own text in our&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;product demo on Azure&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;&lt;STRONG&gt;Audio sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;cy-GB&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Welsh (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;cy-GB-NiaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Mae'r ysgol ar agor drwy'r wythnos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210223-0759551088/TTS-NiaNeural-Waves-Shortsentence-00002.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=b3aatrBz8UIddVDkuFSOc9N2KlGs2dtcIVHxd5HwShU%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;cy-GB&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Welsh (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;cy-GB-AledNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Mae Bangor 8 milltir o Gaernarfon.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210222-0949572958/TTS-AledNeural-Waves-GeneralSentence-00009.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=REoamfTScigj6NINsMxw6XxclSTCD5CyTNJ14CUVvrA%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;en-PH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;English (Philippines)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;en-PH-RosaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;I need to buy a mineral water.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1015010108/TTS-RosaNeural-Waves-GeneralSentence-00058.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=nalnHnLzKCpXrVqEcGz6RBuG1BTwEbyfhk0iRjXEUz4%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;en-PH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;English (Philippines)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;en-PH-JamesNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Let's meet tomorrow at 6 pm.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1019419930/TTS-JamesNeural-Waves-GeneralSentence-00031.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=yrVpXhdhhk25%2FjYhZCJc45aKfrwp1C%2FY8QdHUyhILWU%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;fr-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;French (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;fr-BE-CharlineNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;On se voit pour dîner demain ?&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1008227048/TTS-CharlineNeural-Waves-GeneralSentence-00016.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=nmDuOtQXSZQtgOuPxxuaVDRT4Ljct9CEg7Ee54OA8qE%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;fr-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;French (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;fr-BE-GerardNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Il existe 2 manières de participer.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1018241597/TTS-GerardNeural-Waves-GeneralSentence-00036.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=T698dE7j4VlnIzFh%2Fxu%2BMMPjkOjAG6a5yCuSrT4Mtcs%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;nl-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Dutch (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;nl-BE-DenaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Hij is al urenlang online.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1041306573/TTS-DenaNeural-Waves-GeneralSentence-00008.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=43Wt1OVaATmHPCAhdBOsuJebK01KUV959Bfg%2Ft0giL8%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;nl-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Dutch (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;nl-BE-ArnaudNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Ik vond vele kabouters in hun tuin.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1048103107/TTS-ArnaudNeural-Waves-GeneralSentence-00038.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=HmbZ58lyEUc57Tq6vwNOptr4avEoTc5d3HdLxt20ZuE%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;uk-UA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Ukrainian (Ukraine)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;uk-UA-PolinaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;У Києві завершили реставрацію Андріївської церкви.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0931272540/TTS-PolinaNeural-Waves-GeneralSentence-00042.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=cqZZm%2BwrPWhCXjrDS5UJQFP%2FTHGfDoFesOHVEhxdXhQ%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;uk-UA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Ukrainian (Ukraine)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;uk-UA-OstapNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Загалом було оновлено 4 395 км доріг.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0936496995/TTS-OstapNeural-Waves-GeneralSentence-00012.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=Kc4hCGYi9j9fX4rbq%2FLi9Q%2F0DOu637zzYBbreRXAdaI%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;ur-PK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Urdu (Pakistan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;ur-PK-UzmaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P class="lia-align-right"&gt;واہ! کیا ہی خوبصورت نظارہ ہے۔&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0948509228/TTS-UzmaNeural-Waves-GeneralSentence-00017.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=FeW4%2FPk%2FUWHVPPV6dh6nTIze41cxNoUg3%2B7FgFmeE70%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;ur-PK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Urdu (Pakistan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;ur-PK-AsadNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P class="lia-align-right"&gt;سورج گرہن پاکستانی وقت کے مطابق شام 6 بج کر 34 منٹ پر شروع ہو گا۔&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0954494762/TTS-AsadNeural-Waves-GeneralSentence-00043.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=0op69NuG02bH%2BgMOk7dCzCwW%2Fvl%2FJqyy4E29Aj73DoI%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this update, Azure TTS now supports 60 languages in total. Check out the figure below for more details or see the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener noreferrer"&gt;full language list.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="GarfieldHe_0-1616656804430.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/266926iBAE9E05A59FF3DB9/image-size/large?v=v2&amp;amp;px=999" role="button" title="GarfieldHe_0-1616656804430.png" alt="GarfieldHe_0-1616656804430.png" /&gt;&lt;/span&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Five preview languages now GA&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Last November, we released 5 languages in preview with 10 voices for &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;European locales&lt;/A&gt;. These languages are now generally available in all&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;Neural TTS regions/datacenters&lt;/A&gt;. Azure TTS now has full support for all 24 European languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;&lt;STRONG&gt;Audio samples&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;et-EE-AnuNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/et-EE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;et-EE-KertNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Ametlikku meetodit sellise pettuse avastamiseks ei olegi olemas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/et-EE%20Kert.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;ga-IE-OrlaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Tá an scoil sa mbaile ar oscailt arís inniu.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ga-IE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;ga-IE-ColmNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Ritheadh próiseas comhairliúcháin faoin scéal sa bhfómhar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/ga-IE%20Colm.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lt-LT-OnaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lt-LT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lt-LT-LeonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Aišku, anksčiau ar vėliau paaiškės tos priežastys.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lt-LT%20Leonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lv-LV-EveritaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lv-LV.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lv-LV-NilsNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Aizvadīto gadu uzņēmums noslēdzis ar 6,3 miljonu eiro zaudējumiem.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lv-LV%20Nils.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;mt-MT-GraceNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/mt-MT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;mt-MT-JosephNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Anki tfajjel tal-primarja jaf li l-popolazzjoni tikber fejn hemm il-prosperità.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/mt-MT%20Joseph.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How to integrate with the new voices/languages&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure TTS now covers more languages of the world, and applications using Azure TTS can be easily updated to cover additional countries. All the voices are available through the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech" target="_blank" rel="noopener"&gt;same API&lt;/A&gt;&amp;nbsp;and &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-cpp" target="_blank" rel="noopener"&gt;SDK&lt;/A&gt;. Developers can simply edit the voice and locale list in their applications to use these new voices, without changing any code logic.&lt;/P&gt;
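&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an illustration, here is a minimal Python sketch (assuming the azure-cognitiveservices-speech package) that synthesizes speech with one of the newly released voices through the Speech SDK. The subscription key and region are placeholders; the voice name and sample sentence come from the table above.&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

# Placeholders: replace with your own Speech key and service region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="westeurope")
speech_config.speech_synthesis_voice_name = "cy-GB-NiaNeural"  # switching voices is just a config change

# By default the synthesizer plays the audio through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Mae'r ysgol ar agor drwy'r wythnos.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized audio with voice cy-GB-NiaNeural")&lt;/PRE&gt;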
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For instance, &lt;A href="https://docs.microsoft.com/en-us/microsoftteams/create-a-phone-system-auto-attendant" target="_blank" rel="noopener"&gt;Microsoft Teams auto attendants&lt;/A&gt;&amp;nbsp;let people call your organization and navigate a menu system to speak to the right department, call queue, person, or operator. It uses Azure TTS to render customized prompts as a call response. To better localize audio prompts for different countries, Teams has been integrated with the new TTS languages to serve more customers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Want more languages or voices?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you find that the language which you are looking for is not supported by Azure TTS, reach out to your sales representative, or file a support ticket on Azure. We'd be happy to&amp;nbsp;engage and discuss how to support the languages you need. You can also customize and create a brand voice with your speech data for your apps using the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/custom-neural-voice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt; feature.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Tell us your experiences!&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know how you are using or plan to use Neural TTS voices in this &lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener"&gt;form&lt;/A&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing about your experience and to developing more compelling services together with you for developers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Add voice to your app in 15 minutes&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Explore the available voices in this demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk#optional-change-the-language-and-bot-voice" target="_blank" rel="noopener"&gt;Build a voice-enabled bot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;Deploy Azure TTS voices on prem with Speech Containers&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Build your custom voice&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Mar 2021 15:21:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2021-03-31T15:21:04Z</dc:date>
    </item>
    <item>
      <title>Azure Speech and Batch Ingestion</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539</link>
      <description>&lt;H1&gt;Getting started with Azure Speech and Batch Ingestion Client&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Batch Ingestion Client is a zero-touch transcription solution for all the audio files in your Azure Storage. If you are looking for a quick and effortless way to transcribe your audio files, or even just to explore transcription, without writing any code, then this solution is for you. Through an ARM template deployment, all the resources necessary to seamlessly process your audio files are set up and set in motion.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Why do I need this?&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Getting started with any API requires some amount of time investment in learning the API, understanding its scope, and getting value through trial and error. To speed up your transcription solution for those of you who do not have the time to invest in getting to know our API or related best practices, we created an ingestion layer (a client for batch transcription) that will help you set up a full-blown, scalable, and secure transcription pipeline without writing any code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a smart client in the sense that it implements best practices and is optimized against the capabilities of the Azure Speech infrastructure. It utilizes Azure resources such as Service Bus and Azure Functions to orchestrate transcription requests to Azure Speech Services as audio files land in your dedicated storage containers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Before we delve deeper into the set-up instructions, let us have a look at the architecture of the solution this ARM template builds.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="architecture.png" style="width: 741px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265483iFB98720C64CE6685/image-size/large?v=v2&amp;amp;px=999" role="button" title="architecture.png" alt="architecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The diagram is simple and hopefully self-explanatory. As soon as files land in a storage container, the Event Grid event that indicates the complete upload of a file is filtered and pushed to a Service Bus topic. Azure Functions (time-triggered by default) pick up those events and act on them, creating transcription requests using the Azure Speech Services batch pipeline. When a transcription request is successfully created, an event is placed in another queue in the same Service Bus resource. A different Azure Function, triggered by that event, monitors transcription completion status and copies the actual transcripts into the containers from which the audio files were obtained. That is it. The rest of the features are applied on demand: users can choose to apply analytics to the transcript, produce reports, or redact, all of which are the result of additional resources being deployed through the ARM template. The solution will start transcribing audio files without the need to write any code. If, however, you want to customize it further, that is possible too. The code is available in this &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch" target="_self"&gt;repo&lt;/A&gt;.&lt;/P&gt;
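&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For orientation only, here is a minimal Python sketch of the kind of request the solution submits on your behalf against the Speech batch transcription v3.0 REST API. The deployed Azure Functions handle the request creation and status polling for you, so you never need to write this yourself; the key, region, and SAS URL below are placeholders.&lt;/P&gt;
&lt;PRE&gt;import requests

# Placeholders: use your own Speech key, region, and an audio file SAS URL.
SPEECH_KEY = "YOUR_SPEECH_KEY"
REGION = "westeurope"
AUDIO_SAS_URL = "https://yourstorage.blob.core.windows.net/audio-input/call1.wav?sv=YOUR_SAS_TOKEN"

endpoint = f"https://{REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
body = {
    "displayName": "Sample batch transcription",
    "locale": "en-US",
    "contentUrls": [AUDIO_SAS_URL],
    "properties": {"wordLevelTimestampsEnabled": True},
}

response = requests.post(endpoint, json=body, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
response.raise_for_status()

# The created transcription resource exposes a 'self' URL that can be polled for completion.
print("Poll for completion at:", response.json()["self"])&lt;/PRE&gt;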
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The list of best practices we implemented as part of the solution are:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Optimized the number of audio files included in each transcription with the view of achieving the shortest possible SAS TTL.&lt;/LI&gt;
&lt;LI&gt;Round Robin around selected regions in order to distribute load across available regions (per customer request)&lt;/LI&gt;
&lt;LI&gt;Retry logic optimization to handle smooth scaling up and transient HTTP 429 errors&lt;/LI&gt;
&lt;LI&gt;Running Azure Functions economically, ensuring minimal execution cost&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Setup Guide&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following guide will help you create a set of resources on Azure that will manage the transcription of audio files.&lt;/P&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/" target="_blank" rel="noopener"&gt;Azure Account&lt;/A&gt;&amp;nbsp;as well as an&amp;nbsp;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank" rel="noopener"&gt;Azure Speech key&lt;/A&gt;&amp;nbsp;is needed to run the Batch Ingestion Client.&lt;/P&gt;
&lt;P&gt;Here are the detailed steps to create a speech resource:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;You need to create a Speech Resource with a paid (S0) key. The free key account will not work. Optionally for analytics you can create a Text Analytics resource too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Click on +Create Resource&lt;/LI&gt;
&lt;LI&gt;Type ‘Speech’ in the search box.&lt;/LI&gt;
&lt;LI&gt;Click Create on the Speech resource.&lt;/LI&gt;
&lt;LI&gt;You will find the subscription key under&amp;nbsp;&lt;STRONG&gt;Keys&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;You will also need the region, so make a note of that too.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;To test your account, we suggest you use&amp;nbsp;&lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_blank" rel="noopener"&gt;Microsoft Azure Storage Explorer&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;The Project&lt;/H3&gt;
&lt;P&gt;Although you do not need to download or do any changes to the code you can still download it from GitHub:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;git clone https://github.com/Azure-Samples/cognitive-services-speech-sdk
cd cognitive-services-speech-sdk/samples/batch/transcription-enabled-storage&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make sure that you have downloaded the&amp;nbsp;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/transcription-enabled-storage/Setup/ArmTemplate.json" target="_blank" rel="noopener"&gt;ARM Template&lt;/A&gt;&amp;nbsp;from the repository.&lt;/P&gt;
&lt;H2&gt;Batch Ingestion Client Setup Instructions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Click on&amp;nbsp;&lt;STRONG&gt;+Create Resource&lt;/STRONG&gt;&amp;nbsp;on&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;&amp;nbsp;as shown in the following picture and type ‘&lt;EM&gt;template deployment&lt;/EM&gt;’ in the search box.&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image001.png" style="width: 986px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265484i95D5BC1C83CDF228/image-size/large?v=v2&amp;amp;px=999" role="button" title="image001.png" alt="image001.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 2. Click on the&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;button on the screen that appears, as shown below.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;3. You will be creating the relevant Azure resources from the ARM template provided. Click on the ‘Build your own template in the editor’ link and wait for the new screen to load.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will be loading the template via the&amp;nbsp;&lt;STRONG&gt;Load file&lt;/STRONG&gt;&amp;nbsp;option. Alternatively, you could simply copy/paste the template in the editor.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;Saving the template will result in the screen below. You will need to fill in the form provided. It is important that all the information is correct. Let us look at the form and go through each field.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image011.png" style="width: 640px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265489iC005F591513C0D18/image-size/large?v=v2&amp;amp;px=999" role="button" title="image011.png" alt="image011.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;Please use short, descriptive names in the form for your resource group. Long resource group names may result in deployment errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;First pick the Azure Subscription Id within which you will create the resources.&lt;/LI&gt;
&lt;LI&gt;Either pick or create a resource group. [It would be better to have all the resources within the same resource group so we suggest you create a new resource group].&lt;/LI&gt;
&lt;LI&gt;Pick a region [May be the same region as your Azure Speech key].&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The following settings all relate to the resources and their attributes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Give your transcription-enabled storage account a name [you will be using a new storage account rather than an existing one]. If you opt to use an existing one, then all existing audio files in that account will be transcribed too.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The following 2 steps are optional. Omitting them will result in using the base model to obtain transcripts. If you have created a Custom Speech model using &lt;A href="https://speech.microsoft.com/" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;, then:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Optionally, enter your primary Acoustic model ID&lt;/LI&gt;
&lt;LI&gt;Optionally, enter your primary Language model ID&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you want us to perform language identification on the audio prior to transcription, you can also specify a secondary locale. Our service will check whether the language of the audio content is the primary or secondary locale and select the right model for transcription.&lt;/P&gt;
&lt;P&gt;Transcripts are obtained by polling the service. We acknowledge that there is a cost related to that. So, the following setting gives you the option to limit that cost by telling your Azure Function how often you want it to fire.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enter the polling frequency [there are many scenarios where this only needs to be done a couple of times a day]&lt;/LI&gt;
&lt;LI&gt;Enter the locale of the audio [you need to tell us which language model to use to transcribe your audio]&lt;/LI&gt;
&lt;LI&gt;Enter your Azure Speech subscription key and Locale information&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-SPOILER&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;If you plan to transcribe a large volume of audio (say, millions of files), we propose that you rotate the traffic between regions. In the Azure Speech Subscription Key text box you can put as many keys as you like, separated by a semicolon ';'. It is important that the corresponding regions (again separated by ';') appear in the Locale information text box in the same order. For example, if you have 3 keys (abc, xyz, 123) for east us, west us and central us respectively, then enter 'abc;xyz;123' followed by 'east us;west us;central us'.&lt;/LI-SPOILER&gt;
&lt;P&gt;The rest of the settings relate to the transcription request. You can read more about those in our&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/batch-transcription" target="_blank" rel="noopener"&gt;docs&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select a profanity option&lt;/LI&gt;
&lt;LI&gt;Select a punctuation option&lt;/LI&gt;
&lt;LI&gt;Select to Add Diarization [all locales]&lt;/LI&gt;
&lt;LI&gt;Select to Add Word level Timestamps [all locales]&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Do you need more than transcription? Do you need to apply sentiment analysis to your transcript? Downstream analytics are possible as well, with Text Analytics Sentiment and Redaction offered as part of this solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to perform Text Analytics please add those credentials.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add Text analytics key&lt;/LI&gt;
&lt;LI&gt;Add Text analytics region&lt;/LI&gt;
&lt;LI&gt;Add Sentiment&lt;/LI&gt;
&lt;LI&gt;Add data redaction&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you want further analytics, we can map the transcript JSON we produce to a DB schema.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enter SQL DB credential login&lt;/LI&gt;
&lt;LI&gt;Enter SQL DB credential password&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can feed that data to your custom Power BI script or use the scripts included in this repository. Follow this &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/batch-ingestion-client/PowerBI/guide.md" target="_self"&gt;guide&lt;/A&gt; to set it up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Press&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;to trigger the resource creation process. It typically takes 1-2 minutes. The resulting set of resources is listed below.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;If a Consumption Plan (Y1) was selected for the Azure Functions, make sure that the functions are synced with the other resources (see&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/azure-functions/functions-deployment-technologies#trigger-syncing" target="_blank" rel="noopener"&gt;this&lt;/A&gt;&amp;nbsp;for further details).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To do so, click on your StartTranscription function in the portal and wait until your function shows up:&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_8" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;Do the same for the FetchTranscription function.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-SPOILER&gt;&lt;EM&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;Until you restart both Azure functions you may see errors.&lt;/LI-SPOILER&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Running the Batch Ingestion Client&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Upload audio files to the newly created audio-input container (results are added to the json-result-output and test-results-output containers). Once they have been processed, you can inspect the results.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Use&amp;nbsp;&lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_blank" rel="noopener"&gt;Microsoft Azure Storage Explorer&lt;/A&gt;&amp;nbsp;to test uploading files to your new account. The process of transcription is asynchronous. Transcription usually takes half the time of the audio track to be obtained. The structure of your newly created storage account will look like the picture below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image015.png" style="width: 297px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265492i29661E0FD6C8203E/image-size/large?v=v2&amp;amp;px=999" role="button" title="image015.png" alt="image015.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are several containers to distinguish between the various outputs. We suggest (for the sake of keeping things tidy) following this pattern and using the audio-input container as the only container for uploading your audio.&lt;/P&gt;
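&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you would rather script uploads than use Storage Explorer, below is a minimal Python sketch using the azure-storage-blob package. The connection string and file name are placeholders for your own storage account and audio file; the container names match the ones created by the ARM template.&lt;/P&gt;
&lt;PRE&gt;from azure.storage.blob import BlobServiceClient

# Placeholder connection string for the storage account that the ARM template created.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("audio-input")

# Uploading a file to audio-input kicks off the ingestion pipeline;
# transcripts later appear in the json-result-output container.
with open("call1.wav", "rb") as audio:
    container.upload_blob(name="call1.wav", data=audio)&lt;/PRE&gt;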
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Customizing the Batch Ingestion Client&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By default, the ARM template uses the newest version of the Batch Ingestion Client, which can be found in this repository. If you want to customize it further, clone the &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch" target="_self"&gt;repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To publish a new version, you can use Visual Studio: right-click on the respective project, click Publish, and follow the instructions.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;SPAN&gt;What to build next&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Now that you’ve successfully implemented a speech to text scenario, you can build on it. Take a look at the insights&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/blog/using-text-analytics-in-call-centers/" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt; provides from the transcript, like caller and agent sentiment, key phrase extraction, and entity recognition.&amp;nbsp; If you’re looking specifically to solve for call center transcription, review &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/call-center-transcription" target="_blank" rel="noopener"&gt;this docs page&lt;/A&gt; for further guidance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 20:21:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539</guid>
      <dc:creator>Panos Periorellis</dc:creator>
      <dc:date>2021-03-30T20:21:49Z</dc:date>
    </item>
    <item>
      <title>Re: Form Recognizer  now reads more languages, processes IDs and invoices, trains on tables, and mor</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2224071#M199</link>
      <description>&lt;P&gt;When will the client library support latest API versions? When I build a model with the labeling tool I get an error saying that the model is built with different version than the client is targeting.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 20 Mar 2021 00:00:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2224071#M199</guid>
      <dc:creator>Woocash</dc:creator>
      <dc:date>2021-03-20T00:00:39Z</dc:date>
    </item>
    <item>
      <title>Microsoft named a Leader in 2021 Gartner Magic Quadrant for Cloud AI Developer Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Gartner CAIDS MQ graphic 2021.png" style="width: 957px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265536i164046209606D6C1/image-size/large?v=v2&amp;amp;px=999" role="button" title="Gartner CAIDS MQ graphic 2021.png" alt="Gartner CAIDS MQ graphic 2021.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Gartner recently released their Magic Quadrant for 2021 Cloud AI Developer Services. Microsoft is in the Leaders quadrant and was positioned highest on the ability to execute axis. You can download a complimentary copy of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services&lt;/A&gt; for the full report. In this post, we’ll look at why, we think, Microsoft was placed in the Leaders quadrant.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;According to the report, “Gartner defines cloud AI developer services (CAIDS) as cloud-hosted or containerized services/models that allow development teams and business users to leverage artificial intelligence models via APIs, SDKs, or applications without requiring deep data science expertise.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;They specifically evaluated services with capabilities in language, vision, and automated machine learning. For Azure, this includes Azure Cognitive Services, Azure Machine Learning, and Microsoft’s conversational AI portfolio. For Power Platform, this includes AI Builder and Power Virtual Agents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;“Gartner believes that enterprise development teams will increasingly incorporate models built using AI and ML into applications. These services currently fall into three main functional areas: language, vision and automated machine learning (autoML). The language services include natural language understanding (NLU), conversational agent frameworks, text analytics, sentiment analysis and other capabilities. The vision services include image recognition, video content analysis and optical character recognition (OCR). The autoML services include automated tools that will let developers do data preparation, feature engineering, create models, deploy, monitor and manage models without having to learn data science.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure AI enables you to develop AI applications on your terms, apply AI responsibly, and deploy mission-critical AI solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Develop on your terms&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure AI allows you to build AI applications in your preferred software development language and deploy in the cloud, on-premises, or at the edge. Azure provides options for data scientists and developers of all skill levels – no machine learning expertise required. See the Microsoft section of the &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.gartner.com%2Freprints%2F%3Fid%3D1-25C36W9W%26ct%3D210226%26st%3Dsb&amp;amp;data=04%7C01%7Cporourke%40microsoft.com%7C469a633697b348243cc408d8ea532151%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637516990224565039%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=Cssf6FbIfAsj%2Bcae4zKvyhqavuiPL3IOckTgE8A4Wjc%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Apply AI responsibly&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure offers tools and resources to help you understand, protect, and control your AI solutions, including responsible ML toolkits, responsible bot development guidelines, tools to help you explain model behavior and test for fairness, and more. We never use your data to train our models, and we keep principles like inclusiveness, fairness, transparency, and accountability in mind at every stage of our AI research, development, and deployment. See the Microsoft section of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deploy mission-critical solutions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure lets you access the same AI services that power products like Microsoft Teams and Xbox, and that are proven at global scale. Azure leads the industry when it comes to security, and we have the most comprehensive compliance coverage of any cloud service provider. We continue to innovate and our Microsoft Research team has made significant breakthroughs, most recently reaching human parity with &lt;A href="https://blogs.microsoft.com/ai/azure-image-captioning/" target="_blank" rel="noopener"&gt;image captioning&lt;/A&gt;. See the Microsoft section of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Whether you’re a professional developer or data scientist, or just getting started, we hope that you can use Azure AI services to build impactful AI-powered applications that solve complex problems and enhance customer experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 19 Mar 2021 16:17:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100</guid>
      <dc:creator>maddybutzbach</dc:creator>
      <dc:date>2021-03-19T16:17:37Z</dc:date>
    </item>
    <item>
      <title>The Tenets of Knowledge Management Adoption</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Knowledge Management Systems and Adoption Key Tenets:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Sonia M. Ang – CSA&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In today’s competitive business environment organizations need a clear roadmap that aligns with their training needs and focuses on both short-and long-term objectives.&amp;nbsp; Knowledge Management is a tool that can be implemented to identify and appeal to the training needs of the modern employee.&amp;nbsp; A successful Enterprise Learning system goes beyond the organizational level and allows employees access to information and knowledge, thus creating better alignment at the enterprise level.&amp;nbsp;&amp;nbsp; A well-designed Knowledge Management System can break down barriers by providing partners, clients, and customers with not only essential information and robust training, but also opportunities to promote and inform your organization’s products and services.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Employee retention and customer satisfaction are essential to any organizations long-term success.&amp;nbsp;&amp;nbsp; When considering your organization’s ROI, the benefits of Enterprise Learning are threefold:&amp;nbsp; retention (customers and employees), satisfaction, and improved profitability.&amp;nbsp; Enterprise Learning can be leveraged to provide better development and training opportunities thus promoting a feeling of empowerment amongst your team.&amp;nbsp; Enterprise Learning promotes efficiencies in training, in turn your organization will recognize the cost-saving benefits due to lower employee turnover and customer churn.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Enterprise Knowledge Management Adoption&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;1. &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Management Sponsorship and a COE&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;An executive sponsor provides that critical link between executive leadership and project management and helps support projects successfully to their completion at their expected performance.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Sponsorship of the Enterprise Knowledge Management Project will enhance your product base and create opportunities for your company in the rapidly advancing Knowledge Management sector.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Executive sponsorship will align with our company’s strategy to be experts in Knowledge Management.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Microsoft will be at the forefront of Knowledge Management as organizations rush to adopt more efficient and effective training strategies that better align to the modern worker's needs.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;2. &lt;STRONG&gt;Beyond Training but Execution&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Training programs at the company level are often too extensive in complexity and the amount of learning material can be downright overwhelming.&amp;nbsp; Training professionals need to venture beyond traditional learning and utilize learning strategies that support the needs of today’s modern professionals.&amp;nbsp; Microlearning, for example, provides a host of benefits to your organization in terms of increased learner participation, memorability of courses, and quick deployment with easy updates to your digital learning assets.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Knowledge Management allows for learning concepts to be extracted from larger training programs and utilized as checklists and instructional videos that are easily accessible at a moment’s notice.&amp;nbsp; When learning material is successfully mined it allows the learning process to be refined, thus challenging concepts can be identified and made easier to process and understand.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;3. Collaborative and Social Learning&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;How does your organization support collaboration, problem-solving and the co-creation of knowledge?&amp;nbsp; A successful Enterprise Learning Strategy will push an organization forward by improving collaboration via robust communities and meaningful discussion forums.&amp;nbsp;&amp;nbsp; Building professional learning communities on platforms like Slack and Microsoft TEAMS can help break the walls down in organizations where ideas and knowledge may be siloed.&amp;nbsp; In addition to collaborative discussion platforms that give all community members a voice, the development of Expert Finders can be an important catalyst for creating a robust culture of collaboration.&amp;nbsp; Hidden ideas and knowledge will organically emerge from people in your organization that may hold previously unearthed niche expertise.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;4. &lt;STRONG&gt;Where is the Data?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Generating data for analysis is the foundation of a robust Enterprise Learning strategy.&amp;nbsp; The key to success is to build data collection directly into technical systems supported by a centralized knowledge repository.&amp;nbsp; Data collection in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;education sphere has traditionally focused on summative assessments like exams that are intended to measure a learner’s mastery of objectives.&amp;nbsp; Using the specifications in Enterprise Learning provides another set of metrics by allowing an organization to track formative assessments, such as data and social learning activity.&amp;nbsp; For example, these metrics allows an organization to collect new data, adding another layer of data to your knowledge repository to support the creation of more meaningful formative and summative assessments.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;5. Reusable Content and Reproducibility&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;How do organizations move past the traditional content development models where bulky training manuals were the norm?&amp;nbsp; Learning data and insights help organizations to build learning solutions that are reusable and provide 24/7 access to learning.&amp;nbsp; Additionally, a &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Headless CMS&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt; allows your organization’s content creators to move away from the rigid templates that most traditional learning management systems utilize. This means content creators have more control over the quality of their content, and this streamlines the process of creating unique digital learning experiences for both your employees and customers.&amp;nbsp; Perhaps your organization releases a short instructional video on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;company’s&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rules and regulations.&amp;nbsp; This learning asset has value as both a stand-alone content object in a knowledge base as well as a learning module in a more extensive communications course.&amp;nbsp; Content creators in your organization, ranging from instructional designers to marketing professionals, often create multiple versions of the same material.&amp;nbsp; Creating content that is reusable and not redundant is more efficient as you are not reinventing the wheel.&amp;nbsp; Additionally, you are not burdened with trying to maintain and keep up-to-date multiple versions of the same learning assets.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;6. “Findability” of Learning Assets: Digitization and Technology&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The successful implementation of KM tools can enhance the user experience through successful mining and classification of metadata.&amp;nbsp; The power of KM is powerful in that it provides a taxonomy, ontology, and a finely tuned search system. A well-designed metadata strategy will take advantage of your learning assets which may include courses, webinars, professional learning communities (PLCs) and subject matter experts in your organization.&amp;nbsp; This myriad of data exits across multiple systems in your organization. Using metadata that is contextualized and consistent will ensure that your data is findable.&amp;nbsp; Taking it one-step further ontologies can be tapped to support a complete network of shareable and reusable knowledge across a domain for each unique user.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Summary&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:312,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The adoption of an Enterprise Management system can reframe your organization’s knowledge and learning infrastructure.&amp;nbsp; Modern learners need to consume information quickly thus allowing them to efficiently apply and master essential skills and strategies.&amp;nbsp; A well-engineered Enterprise Learning Plan that focuses on empowering your community will result in higher learner engagement, enhanced workplace skills and a robust culture-of-knowledge across your organization.&amp;nbsp; Curiosity will drive success rather than traditional command and control training strategies.&amp;nbsp; Empower your community by giving them the autonomy to self-direct their own learning.&amp;nbsp; As a result, your organization will enjoy increased engagement, enhanced workforce skills and, in turn, a robust learning culture will grow and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;flourish&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 19 Mar 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091</guid>
      <dc:creator>Sonia Ang</dc:creator>
      <dc:date>2021-03-19T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Re: Form Recognizer  now reads more languages, processes IDs and invoices, trains on tables, and mor</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2220214#M195</link>
      <description>&lt;P&gt;is there a manual about Draw Region feature?&lt;/P&gt;</description>
      <pubDate>Thu, 18 Mar 2021 15:19:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2220214#M195</guid>
      <dc:creator>francescofucci</dc:creator>
      <dc:date>2021-03-18T15:19:52Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2216582#M194</link>
      <description>&lt;P&gt;Found a solution! See&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/cognitive-services/qna-maker-bot-returns-both-short-and-long-answer-in-test-in-web/m-p/1882908" target="_blank"&gt;https://techcommunity.microsoft.com/t5/cognitive-services/qna-maker-bot-returns-both-short-and-long-answer-in-test-in-web/m-p/1882908&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Mar 2021 08:42:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2216582#M194</guid>
      <dc:creator>julianportelli</dc:creator>
      <dc:date>2021-03-17T08:42:28Z</dc:date>
    </item>
    <item>
      <title>Extract Data from PDFs using Form Recognizer with Code or Without!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299</link>
      <description>&lt;P&gt;Form Recognizer is a powerful tool to help build a variety of document machine learning solutions. It is one service; however, it's made up of many prebuilt models that can perform a variety of essential document functions. You can even custom train a model using supervised or unsupervised learning for tasks outside the scope of the prebuilt models! Read more about all the features of Form Recognizer&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/overview?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;. In this example we will look at how to use one of the prebuilt models in the Form Recognizer service to extract the data from a PDF document dataset. Our documents are invoices with common data fields, so we are able to use the prebuilt model without having to build a customized one.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sample Invoice:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="invoice.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264367i59D7C6180F17623E/image-size/large?v=v2&amp;amp;px=999" role="button" title="invoice.png" alt="invoice.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After we take a look at how to do this with Python and Azure Form Recognizer, we will look at how to do the same process with no code using the Power Platform services: Power Automate and Form Recognizer built into AI Builder. In the Power Automate flow we schedule a process to run every day. The process looks in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;blob container to see whether there are new files to be processed. If there are, it gets all blobs from the container and loops through each one to extract the PDF data using a prebuilt AI Builder step. Then it deletes the processed document from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. See what it looks like below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Power Automate Flow:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="flowaibuild.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264369i241F53F6E21A228F/image-size/large?v=v2&amp;amp;px=999" role="button" title="flowaibuild.png" alt="flowaibuild.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Prerequisites for Python&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Account&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3028733&amp;amp;WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Sign up here!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.anaconda.com/products/individual" target="_blank" rel="nofollow noopener"&gt;Anaconda&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and/or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://code.visualstudio.com/Download?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;VS Code&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Basic programming knowledge&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;A id="user-content-prerequisites-for-power-automate" class="anchor" href="https://github.com/cassieview/FormRecognizer#prerequisites-for-power-automate" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Prerequisites for Power Automate&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Power Automate Account&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/power-automate/sign-up-sign-in/?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Sign up here!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;No programming knowledge&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Process PDFs with Python and Azure Form Recognizer Service&lt;/FONT&gt;&lt;/H2&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-services" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-services" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create Services&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, let's create the Form Recognizer Cognitive Service.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Go to &lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;portal.azure.com&lt;/A&gt; to create the resource or click this&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" target="_blank" rel="nofollow noopener"&gt;link&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Now let's create a storage account to store the PDF dataset we will be using. We want two containers: one for the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;PDFs and one for the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;unprocessed PDFs.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/common/storage-account-create?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Azure Storage Account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Create two containers:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-upload-data" class="anchor" href="https://github.com/cassieview/FormRecognizer#upload-data" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Upload data&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Upload your dataset to the Azure Storage&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container, since those files still need to be processed. Once processed, they get moved to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. (A programmatic upload sketch follows the screenshot below.)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The result should look something like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="storageaccounts.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264370i359B312F0A1B3D34/image-size/large?v=v2&amp;amp;px=999" role="button" title="storageaccounts.png" alt="storageaccounts.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Create Notebook and Install Packages&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we have our data stored in Azure Blob Storage, we can connect and process the PDF forms to extract the data using the Form Recognizer Python SDK. You can also use the Python SDK with local data if you are not using Azure Storage; a minimal local-file sketch is shown below, and the rest of this example assumes you are using Azure Storage.&lt;/P&gt;
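&lt;P&gt;Here is that local-file sketch. It is not part of the Azure Storage walkthrough and assumes a &lt;CODE&gt;sample-invoice.pdf&lt;/CODE&gt; on disk plus the same endpoint and key values that are set up below.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

endpoint = "&lt;your endpoint&gt;"
key = "&lt;your key&gt;"
form_recognizer_client = FormRecognizerClient(endpoint, AzureKeyCredential(key))

# Recognize an invoice from a local file instead of a blob URL.
with open("sample-invoice.pdf", "rb") as f:
    poller = form_recognizer_client.begin_recognize_invoices(f)
invoices = poller.result()

# Print one extracted field as a quick sanity check.
invoice_total = invoices[0].fields.get("InvoiceTotal")
if invoice_total:
    print(invoice_total.value, invoice_total.confidence)&lt;/PRE&gt;
&lt;/DIV&gt;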
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Create a new&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://code.visualstudio.com/docs/python/jupyter-support#_create-or-open-a-jupyter-notebook?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Jupyter notebook in VS Code&lt;/A&gt;.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Install the Python SDK&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;!pip install azure-ai-formrecognizer --pre&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Then we need to import the packages.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;os&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;core&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;exceptions&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;ResourceNotFoundError&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;ai&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;formrecognizer&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;FormRecognizerClient&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;core&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;credentials&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;AzureKeyCredential&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;os&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;uuid&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;storage&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;BlobServiceClient&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;BlobClient&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;ContainerClient&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;__version__&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-formrecognizerclient" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-formrecognizerclient" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create FormRecognizerClient&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Update the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with the values from the service you created. These values can be found in the Azure Portal under the Form Recognizer service you created, under&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Keys and Endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the navigation menu.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-s1"&gt;endpoint&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;your endpoint&amp;gt;"&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;key&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;your key&amp;gt;"&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;We then use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to connect to the service and create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.formrecognizerclient?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;FormRecognizerClient&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-s1"&gt;form_recognizer_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;FormRecognizerClient&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;endpoint&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;AzureKeyCredential&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;key&lt;/SPAN&gt;))&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;print_result&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;helper function, used later to print out the results of each invoice.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;def&lt;/SPAN&gt; &lt;SPAN class="pl-en"&gt;print_result&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob_name&lt;/SPAN&gt;):
    &lt;SPAN class="pl-k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;idx&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;in&lt;/SPAN&gt; &lt;SPAN class="pl-en"&gt;enumerate&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;):
        &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"--------Recognizing invoice {}--------"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob_name&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"VendorName"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Vendor Name: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"VendorAddress"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Vendor Address: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerName"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Name: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerAddress"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Address: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerAddressRecipient"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Address Recipient: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceId"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Id: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceDate"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Date: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceTotal"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Total: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"DueDate"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Due Date: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-connect-to-blob-storage" class="anchor" href="https://github.com/cassieview/FormRecognizer#connect-to-blob-storage" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Connect to Blob Storage&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Now let's&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-python?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;connect to our blob storage containers&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;BlobServiceClient&lt;/A&gt;. We will use the client to connect to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;containers that we created earlier.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-c"&gt;# Create the BlobServiceClient object which will be used to get the container_client&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;connect_str&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;Get connection string from the Azure Portal&amp;gt;"&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;BlobServiceClient&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;from_connection_string&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;connect_str&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Container client for raw container.&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get_container_client&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"raw"&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Container client for processed container&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;processed_container_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get_container_client&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"processed"&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Get base url for container.&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;invoiceUrlBase&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;primary_endpoint&lt;/SPAN&gt;
&lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrlBase&lt;/SPAN&gt;)&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;HINT: If you get a "HttpResponseError: (InvalidImageURL) Image URL is badly formatted." error make sure the proper permissions to access the container are set. Learn more about Azure Storage Permissions&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/common/storage-auth?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
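&lt;P&gt;One way to avoid opening the container to anonymous access (a sketch, not part of the original steps) is to pass Form Recognizer a short-lived, read-only SAS URL per blob, generated with &lt;CODE&gt;generate_blob_sas&lt;/CODE&gt; from the azure-storage-blob package; the account key placeholder below is the storage key from the portal.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

def blob_url_with_sas(blob_name):
    # Build a read-only SAS URL, valid for one hour, for a blob in the raw container.
    sas_token = generate_blob_sas(
        account_name=blob_service_client.account_name,
        container_name="raw",
        blob_name=blob_name,
        account_key="&lt;your storage account key&gt;",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    return f"{invoiceUrlBase}/{blob_name}?{sas_token}"

# Then pass blob_url_with_sas(blob.name) as the URL given to
# begin_recognize_invoices_from_url in the processing loop below.&lt;/PRE&gt;
&lt;/DIV&gt;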
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-extract-data-from-pdfs" class="anchor" href="https://github.com/cassieview/FormRecognizer#extract-data-from-pdfs" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Extract Data from PDFs&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are ready to process the blobs now! Here we call&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;list_blobs&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to get a list of blobs in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. Then we loop through each blob, call&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;begin_recognize_invoices_from_url&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to extract the data from the PDF, and use our helper method to print the results. Once we have extracted the data from the PDF we&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;upload_blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;delete_blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"&lt;SPAN class="pl-cce"&gt;\n&lt;/SPAN&gt;Processing blobs..."&lt;/SPAN&gt;)

&lt;SPAN class="pl-s1"&gt;blob_list&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;list_blobs&lt;/SPAN&gt;()
&lt;SPAN class="pl-k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;in&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_list&lt;/SPAN&gt;:
    &lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;f'&lt;SPAN class="pl-s1"&gt;&lt;SPAN class="pl-kos"&gt;{&lt;/SPAN&gt;invoiceUrlBase&lt;SPAN class="pl-kos"&gt;}&lt;/SPAN&gt;&lt;/SPAN&gt;/&lt;SPAN class="pl-s1"&gt;&lt;SPAN class="pl-kos"&gt;{&lt;/SPAN&gt;blob.name&lt;SPAN class="pl-kos"&gt;}&lt;/SPAN&gt;&lt;/SPAN&gt;'&lt;/SPAN&gt;
    &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt;)
    &lt;SPAN class="pl-s1"&gt;poller&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;form_recognizer_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;begin_recognize_invoices_from_url&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Get results&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;poller&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;result&lt;/SPAN&gt;()

    &lt;SPAN class="pl-c"&gt;# Print results&lt;/SPAN&gt;
    &lt;SPAN class="pl-en"&gt;print_result&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;name&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Copy blob to processed&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;processed_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;upload_blob&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;blob_type&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;overwrite&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;True&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Delete blob from raw now that its processed&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;delete_blob&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;)&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;Each result should look similar to this for the above invoice example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="pythonresult.png" style="width: 546px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264371iA3116FA86E09C32D/image-dimensions/546x131?v=v2" width="546" height="131" role="button" title="pythonresult.png" alt="pythonresult.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The prebuilt invoices model worked great for our invoices so we don't need to train a customized Form Recognizer model to improve our results. But what if we did and what if we didn't know how to code?! You can still leverage all this awesomeness in AI Builder with Power Automate without writing any code. We will take a look at this same example in Power Automate next.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A id="user-content-use-form-recognizer-with-ai-builder-in-power-automate" class="anchor" href="https://github.com/cassieview/FormRecognizer#use-form-recognizer-with-ai-builder-in-power-automate" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Use Form Recognizer with AI Builder in Power Automate&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can achieve the same results with no code using Form Recognizer in AI Builder with Power Automate. Let's take a look at how to do that.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-a-new-flow" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-a-new-flow" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create a New Flow&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Log in to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://flow.microsoft.com/" target="_blank" rel="nofollow noopener"&gt;Power Automate&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;then click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Scheduled Cloud Flow&lt;/CODE&gt;. You can trigger Power Automate flows in a variety of ways so keep in mind that you may want to select a different trigger for your project.&lt;/LI&gt;
&lt;LI&gt;Give the Flow a name and select the schedule you would like the flow to run on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-connect-to-blob-storage-1" class="anchor" href="https://github.com/cassieview/FormRecognizer#connect-to-blob-storage-1" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Connect to Blob Storage&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;New Step&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;List blobs&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Step
&lt;UL&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;List blobs&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the ellipsis click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create new connection&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if your storage account isn't already connected
&lt;UL&gt;
&lt;LI&gt;Fill in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Connection Name&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Storage Account name&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(the account you created), and the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Storage Account Access Key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which you can find in the resource keys in the Azure Portal)&lt;/LI&gt;
&lt;LI&gt;Then select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Once the storage account is selected, click the folder icon on the right of the List blobs options. You should see all the containers in the storage account; select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your flow should look something like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="connecttoblob.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264373iCE2CBD509DA1B8DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="connecttoblob.png" alt="connecttoblob.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Loop Through Blobs to Extract the Data&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click the plus sign to create a new step&lt;/LI&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Control&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the textbox and a list of blob properties will appear. Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;value&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;LI&gt;Next select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;add action&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from within the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Flow step.&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Click the textbox and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property. This will get the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;File content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;that we will pass into the Form Recognizer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Process and save information from invoices&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Click the plus sign and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;add new action&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Process and save information from invoices&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the textbox and then the property&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;File Content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;section&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Copy Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Repeat the add action steps&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Copy Blob&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Source url&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;text box and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Destination blob path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and put&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;/processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for the processed container&lt;/LI&gt;
&lt;LI&gt;Select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Overwrite?&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;dropdown and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Yes&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if you want the copied blob to overwrite blobs with the existing name.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Delete Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Repeat the add action steps&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Delete Blob&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;text box and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;block should look something like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="applytoeachblock.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264375i8CB49A960730C7EF/image-size/large?v=v2&amp;amp;px=999" role="button" title="applytoeachblock.png" alt="applytoeachblock.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Save and Test the Flow
&lt;UL&gt;
&lt;LI&gt;Once you have finished creating the flow, save it and test it out using the built-in test features that are part of Power Automate.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This prebuilt model again worked great on our invoice data. However, if you have a more complex dataset, use AI Builder to label your data and create a customized machine learning model for your specific dataset. Read more about how to do that&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/tutorial-ai-builder?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A id="user-content-conclusion" class="anchor" href="https://github.com/cassieview/FormRecognizer#conclusion" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Conclusion&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We covered just a fraction of what you can do with Form Recognizer, so don't let the learning stop here! Check out the highlights below of newly announced Form Recognizer features, plus the additional doc links, to dive deeper into what we did here.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-additional-resources" class="anchor" href="https://github.com/cassieview/FormRecognizer#additional-resources" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Additional Resources&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/blog/new-features-for-form-recognizer-now-available/#:~:text=New%20features%20for%20Form%20Recognizer%20now%20available.%20Neta,tables%20from%20documents%20to%20accelerate%20their%20business%20processes." target="_blank" rel="nofollow noopener"&gt;New Form Recognizer Features&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/overview?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;What is Form Recognizer?&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=preview%2Cv2-1&amp;amp;pivots=programming-language-python?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Quickstart: Use the Form Recognizer client library or REST API&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/tutorial-ai-builder?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Tutorial: Create a form-processing app with AI Builder&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;AI Developer Resources page&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=TX7XwwIG5lw&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=5&amp;amp;t=6s" target="_self"&gt;AI Essentials video including Form Recognizer&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Mar 2021 16:43:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299</guid>
      <dc:creator>cassieview</dc:creator>
      <dc:date>2021-03-16T16:43:56Z</dc:date>
    </item>
    <item>
      <title>Re: Computer Vision for spatial analysis at the Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/2203207#M192</link>
      <description>&lt;P&gt;&lt;LI-USER uid="979236"&gt;&lt;/LI-USER&gt;&amp;nbsp;recent announcement of Azure Percept, capable of running Spatial Analysis suites edge environments much better. Is Azure Percept DK good for production deployments or as a dev kit, it is only meant for prototyping? Secondly, do you have any roadmap for adding&amp;nbsp;intel Movidius Myriad x devices to the list of recommended devices for the spatial analysis? The price of NVIDIA T4 based hardware is heavy on the business case for spatial analysis applications. Also, such servers are not suited to many edge environments.&lt;/P&gt;</description>
      <pubDate>Thu, 11 Mar 2021 19:26:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/2203207#M192</guid>
      <dc:creator>hussnain_ahmed</dc:creator>
      <dc:date>2021-03-11T19:26:27Z</dc:date>
    </item>
    <item>
      <title>Model understanding with Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Mehrnoosh Sameki, Program Manager, Azure Machine Learning.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Overview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Model interpretability and fairness are part of the ‘Understand’ pillar of Azure Machine Learning’s Responsible ML offerings. As machine learning becomes ubiquitous in decision-making, from the end user utilizing AI-powered applications to the business stakeholders using models to make data-driven decisions, it is necessary to provide tools at scale for model transparency and fairness.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" style="width: 626px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262968iCD258909811687E8/image-size/large?v=v2&amp;amp;px=999" role="button" title="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Explaining a machine learning model and performing fairness assessment is important for the following users:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data scientists and model evaluators - at training time, to help them understand their model predictions and assess the fairness of their AI systems, enhancing their ability to debug and improve models.&lt;/LI&gt;
&lt;LI&gt;Business stakeholders and auditors - to build trust in ML models and deploy them more confidently.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Customers like Scandinavian Airlines (SAS) and Ernst &amp;amp; Young (EY) put interpretability and fairness packages to the test to be able to deploy models more confidently.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://customers.microsoft.com/en-us/story/781802-sas-travel-transportation-azure-machine-learning" target="_blank" rel="noopener"&gt;SAS used interpretability to confidently identify fraud&lt;/A&gt; in its EuroBonus loyalty program. SAS data scientists could debug and verify model predictions using interpretability. They produced explanations about model behavior that gave stakeholders confidence in the machine learning models and assisted with meeting regulatory requirements.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://customers.microsoft.com/doclink/809460-ey-partner-professional-services-azure-machine-learning-fairlearn" target="_blank" rel="noopener"&gt;EY utilized fairness assessment and unfairness mitigation&lt;/A&gt; techniques with real mortgage adjudication data to improve the fairness of loan decisions from having an accuracy disparity of 7 percent between men and women to less than 0.5 percent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are releasing enhanced experiences and feature additions for the interpretability and fairness toolkits in Azure Machine Learning, to empower more ML practitioners and teams to build trust with AI systems.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="6" color="#000000"&gt;Model understanding using interpretability and fairness toolkits&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These two toolkits can be used together to understand model predictions and mitigate unfairness. For this demonstration, we shall take a look at a loan allocation scenario. Let’s say that the label indicates whether each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.&lt;/P&gt;
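&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough setup for the walkthrough that follows, the minimal sketch below trains such a predictor with scikit-learn. It is not the exact notebook behind the videos, and the file name and column names (loan_history.csv, repaid, sex) are hypothetical placeholders:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hypothetical setup for the loan scenario: train a classifier that the
# fairness and interpretability dashboards below can then be pointed at.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("loan_history.csv")   # placeholder file of historical loans
y = data["repaid"]                        # 1 = repaid the loan, 0 = did not
A = data["sex"]                           # sensitive feature, kept aside for fairness analysis
X = pd.get_dummies(data.drop(columns=["repaid"]))

X_train, X_test, y_train, y_test, A_train, A_test = train_test_split(
    X, y, A, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
&lt;/LI-CODE&gt;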
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Tech blog diagram.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262695i2806C3064A5DEAB3/image-size/large?v=v2&amp;amp;px=999" role="button" title="Tech blog diagram.jpg" alt="Tech blog diagram.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;FONT size="5"&gt;Identify your model's fairness issues&lt;/FONT&gt;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our revamped fairness dashboard can help uncover allocation harms, where the model unfairly allocates loans among different demographic groups. The dashboard can additionally uncover quality-of-service harms, where a model fails to provide the same quality of service to some people as it does to others. Using the fairness dashboard, you can identify whether the model treats different sex-based demographic groups unfairly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Dashboard configurations&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you first load the fairness dashboard, you need to configure it with desired settings, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;selection of your sensitive demographic of choice (e.g., sex&lt;A href="#_ftn1" target="_self" name="_ftnref1"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&lt;/A&gt;)&lt;/LI&gt;
&lt;LI&gt;model performance metric (e.g., accuracy)&lt;/LI&gt;
&lt;LI&gt;fairness metric (e.g., demographic parity difference).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Model assessment view&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After setting the configurations, you will land on a model assessment view where you can see how the model is treating different demographic groups.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-fairness-toolkit/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation model’s fairness with the AzureML’s fairness toolkit - Microsoft Channel 9 Video"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our fairness assessment shows an 18.3% disparity in the selection rate (the demographic parity difference). In other words, 18.3% more males than females are being qualified for loan acceptance by the model. Now that you’ve seen some unfairness indicators in your model, you can next use our interpretability toolkit to understand why your model is making such predictions.&lt;/P&gt;
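&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, a minimal sketch of computing this metric and launching the dashboard with the open-source Fairlearn and raiwidgets packages, reusing the hypothetical model and splits from the earlier sketch, might look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: quantify the selection-rate disparity and open the fairness dashboard.
from fairlearn.metrics import demographic_parity_difference
from raiwidgets import FairnessDashboard

dpd = demographic_parity_difference(y_test, y_pred, sensitive_features=A_test)
print(f"demographic parity difference: {dpd:.3f}")

# Interactive assessment view comparing selection rate and accuracy per group.
FairnessDashboard(sensitive_features=A_test, y_true=y_test, y_pred=y_pred)
&lt;/LI-CODE&gt;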
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Diagnose your model’s predictions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new revamped interpretability dashboard greatly improves the user experience of the previous dashboard. In the loan allocation scenario, you can understand how a model treats female loan applicants differently than male loan applicants using the interpretability toolkit:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-interpretability/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation with interpretability toolkit - Microsoft Channel 9 Video"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Dataset cohort creation:&lt;/STRONG&gt; You can slice and dice your data into subgroups (e.g., female vs. male vs. unspecified) and investigate or compare your model’s performance and explanations across them.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="font-family: inherit;"&gt;Model performance tab:&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; With the predefined female and male cohorts, we can observe the different prediction distributions between the male and female cohorts, with females experiencing a higher probability of being rejected for a loan.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dataset explorer tab:&lt;/STRONG&gt; Now that you have seen in the model performance tab how females are rejected at a higher rate than males, you can use the data explorer tab to observe the ground truth distribution between males and females. &amp;nbsp;For males, the ground truth data is well balanced between those receiving a rejection or approval whereas, for females, the ground truth data is heavily skewed towards rejection thereby explaining how the model could come to associate the label ‘female’ with rejection.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Aggregate feature importance tab:&lt;/STRONG&gt; Now we observe which top features contribute to the model’s overall prediction (also called global explanations) towards loan rejection. We sort our top feature importances by the Female cohort, which indicates that while the feature “Sex” is the second most important feature contributing towards the model’s predictions for individuals in the female cohort, it does not influence how the model makes predictions for individuals in the male cohort. The dependence plot for the feature “Sex” also shows that only the female group has positive feature importance towards the prediction of being rejected for a loan, whereas the model does not look at the feature “Sex” for males when making predictions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Individual feature importance &amp;amp; What-If tab:&lt;/STRONG&gt; Drilling deeper into the model’s prediction for a specific individual (also called local explanations), we look at the individual feature importances for only the Female cohort. We select an individual who is at the threshold of being accepted for a loan by the model and observe which features contributed towards her prediction of being rejected. “Sex” is the second most important feature contributing towards the model prediction for this individual. The Individual Conditional Expectation (ICE) plot calculates how a perturbation for a given feature value across a range can impact its prediction. We select the feature “Sex” and can see that if this feature had been flipped to male, the probability of being rejected is lowered drastically. We create a new hypothetical What-If point from this individual data point and switch only the “Sex” from female to male, and observe that without changing any other feature related to financial competency, the model now predicts that this individual will have their loan application accepted.&lt;/LI&gt;
&lt;/OL&gt;
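&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The tabs above are driven by an explanation object. A minimal sketch of generating one with the interpret-community TabularExplainer and opening the dashboard, again assuming the hypothetical model and splits from the earlier sketch, could look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: compute global and local explanations, then open the dashboard.
from interpret.ext.blackbox import TabularExplainer
from raiwidgets import ExplanationDashboard

explainer = TabularExplainer(model, X_train)
global_explanation = explainer.explain_global(X_test)     # aggregate feature importance
local_explanation = explainer.explain_local(X_test[:50])  # per-individual importances

print(global_explanation.get_feature_importance_dict())

# Interactive view with cohort creation, dataset explorer and What-If analysis.
ExplanationDashboard(global_explanation, model, dataset=X_test, true_y=y_test)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;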
&lt;P&gt;Once some potential fairness issues are observed and diagnosed, you can move to mitigate those unfairness issues.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Mitigate unfairness issues in your model&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The unfairness mitigation part is powered by the &lt;A href="http://fairlearn.org" target="_blank" rel="noopener"&gt;Fairlearn&lt;/A&gt; open-source package, which includes two types of mitigation algorithms: &lt;A href="https://arxiv.org/pdf/1610.02413.pdf" target="_blank" rel="noopener"&gt;postprocessing algorithms&lt;/A&gt; (&lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.ThresholdOptimizer" target="_blank" rel="noopener"&gt;ThresholdOptimizer&lt;/A&gt;) and &lt;A href="https://arxiv.org/pdf/1803.02453.pdf" target="_blank" rel="noopener"&gt;reduction algorithms&lt;/A&gt; (&lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch" target="_blank" rel="noopener"&gt;GridSearch&lt;/A&gt;, &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.ExponentiatedGradient" target="_blank" rel="noopener"&gt;ExponentiatedGradient&lt;/A&gt;). Both operate as “wrappers” around any standard classification or regression algorithm. &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch" target="_blank" rel="noopener"&gt;GridSearch&lt;/A&gt;, for instance, treats any standard classification or regression algorithm as a black box, and iteratively (a) re-weights the data points and (b) retrains the model after each re-weighting. After 10 to 20 iterations, this process results in a model that satisfies the constraints implied by the selected fairness metric while maximizing model performance. &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.ThresholdOptimizer" target="_blank" rel="noopener"&gt;ThresholdOptimizer&lt;/A&gt;, on the other hand, takes as its input a scoring function that underlies an existing classifier and identifies a separate threshold for each group to optimize the performance metric, while simultaneously satisfying the constraints implied by the selected fairness metric.&lt;/P&gt;
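&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough illustration of the reduction approach, and again assuming the hypothetical training data from the earlier sketch, a GridSearch sweep under a demographic parity constraint might be set up as follows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: GridSearch wraps the same estimator, repeatedly re-weighting the data
# and retraining under a demographic parity constraint.
from fairlearn.reductions import GridSearch, DemographicParity
from sklearn.linear_model import LogisticRegression

sweep = GridSearch(LogisticRegression(max_iter=1000),
                   constraints=DemographicParity(),
                   grid_size=20)
sweep.fit(X_train, y_train, sensitive_features=A_train)

# Candidate models trade accuracy against demographic parity difference; compare
# them (and the unmitigated model) in the fairness dashboard to pick one.
mitigated_models = sweep.predictors_
&lt;/LI-CODE&gt;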
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The fairness dashboard also enables the comparison of multiple models, such as the models produced by different learning algorithms and different mitigation approaches. Leaving aside the dominated models from GridSearch, for instance, you can see the unmitigated model on the upper right side (with the highest accuracy and highest demographic parity difference) and can click on any of the mitigated models to observe them further. This allows you to examine trade-offs between performance and fairness.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="model fairness comparison.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262696i74D857109F63B0D3/image-size/large?v=v2&amp;amp;px=999" role="button" title="model fairness comparison.png" alt="model fairness comparison.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Comparing results of unfairness mitigation&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After applying the unfairness mitigation, we go back to the interpretability dashboard and compare the unmitigated model with the mitigated model. In the figure below, we see a more even probability distribution for the female cohort for the mitigated model on the right:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model interpretability before after.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262697i9D6C42F08A512188/image-size/large?v=v2&amp;amp;px=999" role="button" title="Model interpretability before after.jpg" alt="Model interpretability before after.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Revisiting the fairness assessment dashboard, we also see a drastic decrease in demographic parity difference from 18.8% (unmitigated model) to 0.412% (mitigated model):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model fairness before after.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262698i60AA2DC0494DE9BF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Model fairness before after.jpg" alt="Model fairness before after.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Saving model explanations and fairness metrics to Azure Machine Learning Run History&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning’s (AzureML) interpretability and fairness toolkits can be run both locally and remotely. If run locally, the libraries will not contact any Azure services. Alternatively, you can run the algorithms remotely on AzureML compute and log all the explainability and fairness information into AzureML’s run history via the AzureML SDK to save and share them with other team members or stakeholders in AzureML studio.&lt;/P&gt;
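&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an illustration, a minimal sketch of uploading the global explanation from the earlier sketch to run history with the azureml-interpret SDK might look like this; fairness metrics can be uploaded in a similar way via the azureml-contrib-fairness package:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: log an explanation to AzureML run history so it shows up in studio.
from azureml.core import Run
from azureml.interpret import ExplanationClient

run = Run.get_context()                   # the current AzureML run (remote or offline)
client = ExplanationClient.from_run(run)
client.upload_model_explanation(global_explanation,
                                comment="loan allocation - global explanation")

# Team members can later retrieve it from the same run in AzureML studio or in code.
downloaded_explanation = client.download_model_explanation()
&lt;/LI-CODE&gt;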
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AML explanation.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262699i455620FEE621210A/image-size/large?v=v2&amp;amp;px=999" role="button" title="AML explanation.png" alt="AML explanation.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure ML’s Automated ML supports explainability for its best model as well as on-demand explainability for any other models generated by Automated ML.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Learn more&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/responsible-ai" target="_blank" rel="noopener"&gt;Explore this scenario&lt;/A&gt; and other sample notebooks in the Azure Machine Learning sample notebooks GitHub.&lt;/P&gt;
&lt;P&gt;Learn more about the &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml" target="_blank" rel="noopener"&gt;Responsible ML offerings in Azure Machine Learning&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability" target="_blank" rel="noopener"&gt;interpretability&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml" target="_blank" rel="noopener"&gt;fairness&lt;/A&gt; concepts and see documentation on how-to guides for using &lt;A href="https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability" target="_blank" rel="noopener"&gt;interpretability&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-fairness-aml" target="_blank" rel="noopener"&gt;fairness&lt;/A&gt; in Azure Machine Learning.&lt;/P&gt;
&lt;P&gt;Get started with a &lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;free trial of the Azure Machine Learning service&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="#_ftnref1" target="_blank" rel="noopener" name="_ftn1"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&lt;/A&gt; This dataset is from the 1994 US Census Bureau Database where “sex” in the data was limited to binary categorizations.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Mar 2021 19:05:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141</guid>
      <dc:creator>mithigpe</dc:creator>
      <dc:date>2021-03-16T19:05:14Z</dc:date>
    </item>
    <item>
      <title>Re: Form Recognizer  now reads more languages, processes IDs and invoices, trains on tables, and mor</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2198062#M190</link>
      <description>&lt;P&gt;So it has been 9 months since the last release of Form Recognizer (2.0). Should we be expecting a 12-month cadence for releases? Like most organizations, we can't use "preview" software in production.&lt;/P&gt;</description>
      <pubDate>Wed, 10 Mar 2021 01:44:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/bc-p/2198062#M190</guid>
      <dc:creator>smf723</dc:creator>
      <dc:date>2021-03-10T01:44:58Z</dc:date>
    </item>
    <item>
      <title>Advance Resource Access Governance for AML</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520</link>
      <description>
&lt;DIV class="lia-message-body-wrapper lia-component-message-view-widget-body"&gt;
&lt;DIV id="bodyDisplay" class="lia-message-body"&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;Access control is a fundamental building block for enterprise customers, where protecting assets at various levels is absolutely necessary to ensure that only the relevant people with certain positions of authority are given access with different privileges. This is especially prevalent in machine learning, where data is essential to building ML models, and companies are highly cautious about how the data is accessed and managed, especially with the introduction of GDPR. We are seeing an increasing number of customers seeking explicit control of not only the data, but also various stages of the machine learning lifecycle, from experimentation all the way to operationalization. Assets such as generated models, cluster creation and model deployment need to be governed to ensure that controls are in line with the company’s policy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Azure traditionally provides Role-based Access Control [1], which helps manage access to resources: who can access them and what they can do with them. This is primarily achieved via the concept of roles. A role defines a collection of permissions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Existing Roles in AML&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning provides three roles [3] for enterprise customers to provision as coarse-grained access control, designed with simplicity in mind. The first role (Owner) has the highest level of privileges and grants full control of the workspace. This is followed by Contributor, a more restricted role that prevents users from changing role assignments. Reader has the most restrictive permissions and is typically read/view only (see Figure 1 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rbac-3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262754iAB96302DF84B6F1D/image-size/large?v=v2&amp;amp;px=999" role="button" title="rbac-3.png" alt="rbac-3.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;Figure 1 - Existing AML roles&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;What we have found with customers is that coarse-grained access control immensely simplifies the management of roles and works quite well for a small team working primarily in the experimentation environment. However, when a company decides to operationalize its ML work, especially in the enterprise space, these roles become far too broad and too simplistic. In the enterprise space, the deployment tends to have several stages (such as dev, test, pre-prod, prod, etc.) and requires various skill sets (data scientist, data engineer, etc.) with greater control in each stage. For example, a Data Scientist may not operate in the production environment. A Data Engineer can only provision resources and should not have the ability to commission and decommission training clusters. Enforcing and monitoring such governance policies is crucial for companies to maintain the integrity of their business and IT processes.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Unfortunately, such requirements cannot be captured with the existing roles. Enterprises need a better mechanism to define policies for the various assets in AML to satisfy their business-specific requirements.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This is where the exciting new feature of advanced Role-based Access Control really shines. It is based on fine-grained access control at the component level (see Figure 2), with a number of pre-built out-of-the-box roles, plus the ability to create custom roles that can capture and enforce more complex governance processes.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Advance Fine-grained Role-based Access Control&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new advanced Role-based Access Control feature of AML solves many of the enterprise problems around the ability to restrict or grant user permissions for various components. AML currently defines 16 components with varying permissions.&lt;/P&gt;
&lt;BR /&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aml-components.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260370iC2929C8E66E43458/image-size/large?v=v2&amp;amp;px=999" role="button" title="aml-components.png" alt="aml-components.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 2 - Components Level RBAC&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Each component defines a list of actions such as read, write, delete, etc. These actions can then be combined to create a specific custom role. As an illustration, Figure 3 below shows the list of actions currently available for the Datastore component.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="policy-1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262756iB724D52251F4900B/image-size/large?v=v2&amp;amp;px=999" role="button" title="policy-1.png" alt="policy-1.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 3 - Datastore Actions&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Datastores, along with Datasets, are important concepts in Azure Machine Learning, since they provide access to various data sources with lineage and tracking ability. Many enterprises have built global data lakes that hold terabytes of data, which can include highly sensitive information. Companies are quite protective of who can access this data, along with the business justifications for how the data is accessed and used. It is therefore imperative that tighter access control is mandated through a specific role, such as a Data Engineer, to accomplish such tasks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Fortunately, AML's advanced access control provides custom roles to cater for company-specific access requirements, which may be a hybrid of the built-in roles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Custom Role&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom role [4] allows creation of Fine-grained Access Control on various components, such as the workspace, datastore, etc.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Can be any combination of data or control plane actions that AzureML+AISC support.&lt;/LI&gt;
&lt;LI&gt;Useful for creating roles scoped to a specific function, like an MLOps Engineer&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These controls are defined in a JSON role definition, for example:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "Name": "Data Scientist",
    "IsCustom": true,
    "Description": "Can run experiment but can't create or delete datastore.",
    "Actions": ["*"],
    "NotActions": [
        "Microsoft.MachineLearningServices/workspaces/*/delete",
        "Microsoft.MachineLearningServices/workspaces/ datastores/write",
        "Microsoft.MachineLearningServices/workspaces/ datastores /delete",
        “Microsoft.MachineLearningServices/workspaces/datastores/write”,
        "Microsoft.Authorization/*/write"
    ],
    "AssignableScopes": [
        "/subscriptions/&amp;lt;subscription_id&amp;gt;/resourceGroups/&amp;lt;resource_group_name&amp;gt;/providers/Microsoft.MachineLearningServices/workspaces/&amp;lt;workspace_name&amp;gt;"
    ]
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The above code defines a Data Scientist who can run an experiment but cannot create or delete a Datastore. This role can be created using the Azure CLI (az role definition create --role-definition filename); note that the Azure CLI ML extension needs to be installed first.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--1063417684"&gt;Role Operation Workflow&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In an organization, the following activities are to be undertaken by various role owners.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A subscription admin comes in for an enterprise and requests AmlCompute quota&lt;/LI&gt;
&lt;LI&gt;They create a resource group and a workspace for a specific team, and also set workspace-level quota&lt;/LI&gt;
&lt;LI&gt;The team lead (aka workspace admin) comes in and starts creating compute within the quota that the subscription admin defined for that workspace&lt;/LI&gt;
&lt;LI&gt;A Data Scientist comes in and uses the compute that the workspace admin created for them (clusters or instances).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1424095149"&gt;Roles for Enterprise&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AML provides a single environment for going end-to-end from experimentation to operationalization. For a start-up this is really useful, as start-ups tend to operate in a very agile manner, where many iterations can happen in a short period of time, and the ability to quickly move from ideation to production really reduces their cycle time. Unfortunately, this may not be the case for enterprise customers, who typically use two or three environments to carry out their production workload, such as Dev, QA and Prod.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Dev is used for experimentation, QA caters for various functional and non-functional requirements, and Prod is used for deployment into production for consumer usage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The environments would also have various roles to carry out different activities, such as Data Scientist, Data Engineer and MLOps Engineer (see figure 8 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="role-4.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262758iDDE94C2D4806F6B4/image-size/large?v=v2&amp;amp;px=999" role="button" title="role-4.png" alt="role-4.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 8 - Enterprise Roles&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Data Scientist normally operates in the Dev environment and has full access to all the permissions related to carrying out experiments, such as provisioning training clusters, building models, etc. Some permissions are granted in the QA environment, primarily related to model testing and performance, and very minimal access is given to the Prod environment, mainly telemetry (see Table 1 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Data Engineer, on the other hand, primarily operates in the Build and QA environments. Their main focus is data handling, such as data loading and data wrangling. They have restricted access in the Prod environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_10-1614737951507.png" style="width: 864px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260367iECC3583AFC821F5F/image-dimensions/864x345?v=v2" width="864" height="345" role="button" title="Mufajjul_Ali_10-1614737951507.png" alt="Mufajjul_Ali_10-1614737951507.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Table 1 - Role/environment Matrix&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An MLOps Engineer has some permissions in the Dev environment, but full permissions in QA and Prod. This is because an MLOps Engineer is tasked with building the pipeline, gluing things together, and ultimately deploying models in production.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The interesting part is how all these roles, environments, and other components fit together in Azure to provide the much-needed access governance for enterprise customers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--517044038"&gt;Enterprise AML Roles Deployment&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It is important for enterprises to be able to model these complex role/environment mappings, as shown in Table 1. Fortunately, this can be achieved in Azure using a combination of AD groups, roles and resource groups.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_11-1614737951524.png" style="width: 724px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260368i241720729954A0AF/image-dimensions/724x470?v=v2" width="724" height="470" role="button" title="Mufajjul_Ali_11-1614737951524.png" alt="Mufajjul_Ali_11-1614737951524.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 9 - Enterprise AML Roles Deployment&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Fundamentally, Azure Active Directory groups play a major part in gluing all these components together to make it functional.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first step is to group the users for a given persona (DS, DE, etc.) into a “Role AD group”. Then assign roles with various RBAC actions (Data Writer, MLContributor, etc.) to this AD group. All these users will now inherit the permissions specific to those role(s). Multiple AD groups will be created for the different persona roles.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Separate AD groups (‘AD group for Environment’) are created for each environment (i.e. Dev, QA and Prod), and the Role AD groups are added to these Environment AD groups. This creates a mapping of users belonging to a specific role persona, with the given permissions, to an environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The ‘AD group for Environment’ is then assigned to a resource group, which contains a specific AML Workspace.&amp;nbsp; This ensures that the role permissions assigned to users will be enforced at the workspace level.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we have discussed the new advanced Role-based Access Control, and how it can be applied in a complex enterprise with various environments and different user personas.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The important point to note is the flexibility that comes with this new feature: it can operate on any of the 16 AML components and define fine-grained access control for each through custom roles, in addition to the four out-of-the-box roles, which should be sufficient for the majority of customers.&lt;/P&gt;
&lt;H2 id="toc-hId-1970468795"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;References&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;[1]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/role-based-access-control/overview" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/role-based-access-control/overview&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[2]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-gb/services/machine-learning/" target="_blank" rel="noopener noreferrer"&gt;https://azure.microsoft.com/en-gb/services/machine-learning/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[3]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[4]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional Links:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;A tabindex="-1" title="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" target="_blank" rel="noreferrer noopener"&gt;https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Co-authors:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/195402" target="_blank" rel="noopener"&gt;@Nishank Gupt and @John Wu&lt;/A&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 11 Mar 2021 09:28:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520</guid>
      <dc:creator>mufy</dc:creator>
      <dc:date>2021-03-11T09:28:08Z</dc:date>
    </item>
    <item>
      <title>Improving collaboration and productivity in Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Sharon Xu Program Manager, Azure Notebooks.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today we are very proud to announce the next set of productivity features and improvements for the notebook experience. Since &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009" target="_blank" rel="noopener"&gt;we announced the GA release&lt;/A&gt; of Notebooks in Azure Machine Learning (Azure ML), &lt;SPAN&gt;we have learned a lot from our customers&lt;/SPAN&gt;. Over the past few months, we have incrementally improved the notebook experience while simultaneously contributing back to &lt;A href="https://devblogs.microsoft.com/python/bringing-the-power-of-the-monaco-editor-to-nteract/" target="_blank" rel="noopener"&gt;the open source nteract project&lt;/A&gt;. The Azure ML team recently released a robust set of new functionalities designed to improve data scientist productivity and collaboration in Azure ML Notebooks.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Data scientist &amp;amp; Developer Productivity&lt;/H2&gt;
&lt;P&gt;We have spoken to several data scientists and developers to fully understand the additional features needed to improve productivity while developing machine learning projects. From feedback, we have found that users constantly needed the following enhancements to speed up their workflow: a clear indication that a cell has finished running, a way to templatize common code excerpts, a way to check variable contents, and more. The following list is a culmination of the most highly requested productivity features:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Cell Status Bar. The status bar located in each cell indicates the cell state: whether a cell has been queued, successfully executed, or run into an error. The status bar also displays the execution time of the last run.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#explore-variables-in-the-notebook" target="_blank" rel="noopener"&gt;Variable Explorer.&lt;/A&gt; provides a quick glance into the data type, size, and contents of your variables and dataframes, allowing for quicker and simpler debugging.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_5-1614125127829.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257237iB8D804F9493E69DB/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_5-1614125127829.png" alt="abeomor_5-1614125127829.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 1: (1) Cell status bar (2) Variable explorer&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Notebook snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_4-1614125123189.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257236iFDAB2F193BD4424E/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_4-1614125123189.png" alt="abeomor_4-1614125123189.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 2 (1) Notebook snippets panel, showing all useful snippets&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/visualstudio/intellicode/overview" target="_blank" rel="noopener"&gt;IntelliCode&lt;/A&gt;. IntelliCode provides intelligent auto-completion suggestions using an ML algorithm that analyzes the context of your notebook code. IntelliCode suggestions are designated with a star.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_3-1614125118045.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257235i8B5BCB3FB1176BA5/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_3-1614125118045.png" alt="abeomor_3-1614125118045.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 3: IntelliCode in Azure ML Notebooks&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Keyboard shortcuts with full Jupyter parity. Azure ML now supports all the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#useful-keyboard-shortcuts" target="_blank" rel="noopener"&gt;keyboard shortcuts available in Jupyter&lt;/A&gt; and more.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#navigate-with-a-toc" target="_blank" rel="noopener"&gt;Table of Contents.&lt;/A&gt; For large notebooks, the Table of Contents panel then allows you to navigate to the desired section. The sections of the notebook are designated by the Markdown headers.&lt;/LI&gt;
&lt;LI&gt;Markdown Side-by-side Editor in Notebooks. Within each notebook, the new side-by-side editor allows you to view the rendered results of your Markdown cells directly in your notebook editing.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_2-1614125111883.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257234iAE1377BE0412F326/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_2-1614125111883.png" alt="abeomor_2-1614125111883.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 4: &amp;nbsp;(1) Table of content pane (2) Markdown side by side&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Collaboration and Sharing&lt;/H2&gt;
&lt;P&gt;An increasing number of data scientists and developers are creating notebooks collaboratively and sharing these notebooks across their teams. We heard feedback that most users feel they are missing adequate tools to edit notebooks simultaneously or share their notebooks with a broader audience. Users often resort to screen shares and calls to complete or present work within a notebook. We recently released a few new features to help address some of these issues:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Co-editing (preview). Co-editing makes collaboration easier than ever. The notebook can now be shared by sending the notebook URL, allowing multiple users to edit the notebook in real-time.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_1-1614125073756.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257232i6D39A594790A76B9/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_1-1614125073756.png" alt="abeomor_1-1614125073756.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 5: Live Co-editing in Azure ML&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#export-a-notebook" target="_blank" rel="noopener"&gt;Export Notebook as Python, LaTeX or HTML&lt;/A&gt;. When you feel satisfied with the results from your notebook and ready to present to your colleagues, you can export the notebook to various formats for easy sharing. LaTeX, HTML, and .py are currently supported.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_0-1614125062082.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257231iFD15BDAC14ECD764/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_0-1614125062082.png" alt="abeomor_0-1614125062082.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 6: Export Notebooks as Python and more in Azure ML&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Get Started Today&lt;/H2&gt;
&lt;P&gt;To begin using these features in Azure ML Notebooks, you will first need to &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?tabs=python" target="_blank" rel="noopener"&gt;create an Azure Machine Learning workspace&lt;/A&gt;. Your Azure ML workspace serves as your one-stop shop for all your machine learning needs, where you can create and share all your machine learning assets.&lt;/P&gt;
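&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you prefer to create the workspace from code rather than the portal, a minimal sketch with the Azure ML Python SDK might look like this; the workspace name, resource group, region and subscription ID below are placeholders for your own values:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: create (or reuse) the workspace that hosts the Notebooks experience.
from azureml.core import Workspace

ws = Workspace.create(name="my-workspace",                                      # placeholder
                      subscription_id="00000000-0000-0000-0000-000000000000",   # placeholder
                      resource_group="my-resource-group",                       # placeholder
                      create_resource_group=True,
                      location="eastus")
ws.write_config()   # saves config.json so notebooks can later call Workspace.from_config()
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;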
&lt;P&gt;Once you have your workspace set up, you can get started using &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks" target="_blank" rel="noopener"&gt;all the features in the Azure ML Notebooks experience&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt; The notebooks experience aims to provide you with an integrated suite of data science tools. Users can start working with a highly productive and collaborative Jupyter notebook editor directly in their workspace as well as quickly access other ML assets such as experiment details, datasets, models, and more.&lt;/P&gt;
&lt;P&gt;With the addition of this host of features, Notebooks in Azure ML aims to improve every aspect of your development needs: collaboration, code editing, and debugging. Give these features a try and &lt;A href="https://www.surveymonkey.com/r/D9RHYPV?hostName=ml.azure" target="_self"&gt;leave your feedback&lt;/A&gt;. The feedback provided by our community is what drives us to improve and build new features. As we continue to push out new releases, keep an eye out, because the team has a few more exciting features coming out soon.&lt;/P&gt;
      <pubDate>Wed, 10 Mar 2021 18:28:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906</guid>
      <dc:creator>abeomor</dc:creator>
      <dc:date>2021-03-10T18:28:13Z</dc:date>
    </item>
    <item>
      <title>Integrating AI: Prototyping a No-Code solution with Power Apps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-prototyping-a-no-code-solution-with-power-apps/ba-p/2189550</link>
      <description>&lt;P&gt;&lt;SPAN data-key="598477b2276e441ba5d5f43dc3367887"&gt;You might have cutting-edge AI features, but it is hard to know how useful they will be before letting your users beta test your prototype. You can &lt;STRONG&gt;build fast&lt;/STRONG&gt;, &lt;STRONG&gt;deploy&lt;/STRONG&gt; and &lt;STRONG&gt;deliver&lt;/STRONG&gt; your app and iterate without writing any code, using &lt;A href="https://powerapps.microsoft.com/en-us/ai-builder/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;AI Builder&lt;/A&gt; and &lt;A href="https://powerplatform.microsoft.com/en-us/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;This article explains what&amp;nbsp;&lt;A title="Power Platform Overview" href="https://powerplatform.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt;&amp;nbsp;is, as well as go through a &lt;STRONG&gt;step by step&lt;/STRONG&gt; process to create an application that detects objects from photos using &lt;A title="Explore Power Apps for free for 30 Days" href="https://docs.microsoft.com/en-us/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Power Apps&lt;/STRONG&gt;&lt;/A&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;&lt;A title="Use AI Builder in Power Apps" href="https://docs.microsoft.com/powerapps/use-ai-builder?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;AI Builder&lt;/A&gt;. &lt;/STRONG&gt;Check out the video below to see the app we will build to detect different &lt;A title="What is Mixed Reality" href="https://docs.microsoft.com/windows/mixed-reality/discover/mixed-reality?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Mixed Reality&lt;/A&gt; Headsets such as HoloLens version 1 and 2 Augmented Reality and Virtual Reality headsets and their hand controllers.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Yonet_0-1615005261042.gif" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261357iF46BD002163A4704/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Yonet_0-1615005261042.gif" alt="Yonet_0-1615005261042.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;What is Power Platform?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;&lt;A href="https://powerplatform.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt; is a set of &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;tools,&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;API&lt;/STRONG&gt;'s and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;SDK&lt;/STRONG&gt;'s that helps you &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;analyze your data&lt;/STRONG&gt; and build &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;automations,&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;applications&lt;/STRONG&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;virtual agents &lt;/STRONG&gt;with or without having to write any code.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerPlatform.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231026iEAFC2816C368547F/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerPlatform.png" alt="powerPlatform.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What are Power Apps?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A title="Power Apps" href="https://powerapps.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Apps&lt;/A&gt; allows you to create applications with a drag-and-drop UI and easy integration of your data and 3rd-party APIs through connectors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJoZWFkaW5nLTIlMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMldoYXQlMjBhcmUlMjBQb3dlciUyMEFwcHMlM0YlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;A &lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="8fd8f66effb84ecab4f17ad1733a3956"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;connector&lt;/STRONG&gt;&lt;/A&gt; is a proxy or a wrapper around an API that allows the underlying service to talk to Microsoft Power Automate, Microsoft Power Apps, and Azure Logic Apps. It provides a way for users to connect their accounts and leverage a set of pre-built &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;actions&lt;/STRONG&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;triggers&lt;/STRONG&gt; to build their apps and workflows. For example, you can use the&amp;nbsp;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/twitter/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="605f68c94edb4795a3483232cf113704"&gt;Twitter connector&lt;/A&gt; to get tweet data and visualize it in a dashboard or use the&amp;nbsp;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/twilio/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="6758248276c04f958e3929872b0dd8f3"&gt;Twilio connector&lt;/A&gt; to send your users text messages without having to be an expert in Twitter or Twilio APIs or having to write a line of code.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;Check out the&lt;A href="https://docs.microsoft.com/en-us/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="153c605aa3bd4f93b3f8915b02fae951"&gt; list of connectors for Power Apps&lt;/A&gt; to see all the APIs that are available. Notice &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-powerautomate-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="e1fb307473f54ee386f290577633f8dc"&gt;Power Automate&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-logicapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="b78c6ee33cfb420ea048aa6d91d0dba4"&gt;Logic App connectors&lt;/A&gt; might not be the same.&lt;/EM&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What is AI Builder?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://powerapps.microsoft.com/en-us/ai-builder/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;AI Builder&lt;/STRONG&gt;&lt;/A&gt; is one of the additional features of Power Apps. With AI Builder, you can &lt;STRONG&gt;add intelligence to your apps&lt;/STRONG&gt; even if you have no coding or data science skills.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aiBuilderAppView.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231028i7F000B1F376D8511/image-size/large?v=v2&amp;amp;px=999" role="button" title="aiBuilderAppView.png" alt="aiBuilderAppView.png" /&gt;&lt;/span&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;&lt;SPAN&gt;What are some of the use cases for AI Builder?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="c2879a1fc7c24de08011e12588d72701"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="a8ea00f62195431daf264e3a15f6839f"&gt;You can use pre-trained models to:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="c2879a1fc7c24de08011e12588d72701"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMllvdSUyMGNhbiUyMHVzZSUyMHByZS10cmFpbmVkJTIwbW9kZWxzJTIwdG8lM0ElMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtdW5vcmRlcmVkJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LWl0ZW0lMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyRGV0ZWN0JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyb2JqZWN0cyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwZnJvbSUyMGltYWdlcyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFuYWx5emUlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMHlvdXIlMjBjdXN0b21lcnMlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyc2VudGltZW50JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBmcm9tJTIwZmVlZGJhY2slMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtaXRlbSUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJEZXRlY3QlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0
JTIyJTNBJTIya2V5d29yZHMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMGZyb20lMjB0ZXh0JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LWl0ZW0lMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyRXh0cmFjdCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnNwZWNpZmljJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyaW5mb3JtYXRpb24lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMGFib3V0JTIweW91ciUyMGJ1c2luZXNzJTIwZnJvbSUyMHRleHQlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="a5f03958d358480e94bab65fb99349ec"&gt;
&lt;UL&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2fa8cc24b74e404abc5686de2a86b58e"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Detect&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;objects&lt;/STRONG&gt; from images&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Analyze&lt;/STRONG&gt; your customers'&amp;nbsp;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;sentiment&lt;/STRONG&gt; from feedback&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;Detect &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;keywords&lt;/STRONG&gt; from text&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Extract&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;specific&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;information&lt;/STRONG&gt; about your business from text&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Is AI Builder the right choice?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="4ab717d03d9f48b09f4d0045fc4c6cea"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="af4ac214f4014e6bb30d34b4e7c20133"&gt;Great question! There are so &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;many tools&lt;/STRONG&gt; out there and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;many ways to do the same thing&lt;/STRONG&gt;. How do you know which one is the right solution before investing time and effort?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdyZWF0JTIwcXVlc3Rpb24hJTIwVGhlcmUlMjBhcmUlMjBzbyUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJtYW55JTIwdG9vbHMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMG91dCUyMHRoZXJlJTIwYW5kJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMm1hbnklMjB3YXlzJTIwdG8lMjBkbyUyMHRoZSUyMHNhbWUlMjB0aGluZyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyLiUyMEhvdyUyMGRvJTIweW91JTIwa25vdyUyMHdoaWNoJTIwb25lJTIwaXMlMjB0aGUlMjByaWdodCUyMHNvbHV0aW9uJTIwYmVmb3JlJTIwaW52ZXN0aW5nJTIwdGltZSUyMGFuZCUyMGVmZm9ydCUzRiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJJJTIwaGF2ZSUyMGElMjBydWxlJTIwb2YlMjB0aHVtYiUyMHdoZW4lMjBJJTIwd2FudCUyMHRvJTIwYnVpbGQlMjBzb21ldGhpbmclMkMlMjB1c2UlMjB3aGF0ZXZlciUyMGlzJTIwYXZhaWxhYmxlJTIwYW5kJTIwZWFzeSUyMHRvJTIwdXNlJTIwZmlyc3QuJTIwV2hlbiUyMHlvdXIlMjBuZWVkcyUyMGV4Y2VlZCUyMHdoYXQlMjB0aGUlMjB0b29sJTIweW91JTIwYXJlJTIwdXNpbmclMjBjb3ZlcnMlMkMlMjBsb29rJTIwaW50byUyMGFub3RoZXIlMjBzb2x1dGlvbiUyMG9yJTIwYnVpbGRpbmclMjBpdCUyMHlvdXJzZWxmLiUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="8d1e08bc89f440adbb979587bf0c0a51"&gt;I have a rule of thumb when I want to build something, use whatever is available and easy to use first. When your needs exceed what the tool you are using covers, look into another solution or building it yourself.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c lia-indent-padding-left-30px" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;EM&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN style="font-family: inherit;"&gt;Use the tool &lt;/SPAN&gt;&lt;STRONG class="bold-3c254bd9" style="font-family: inherit;" data-slate-leaf="true"&gt;easiest &lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt;to get started when you are building your idea. When your &lt;/SPAN&gt;&lt;STRONG class="bold-3c254bd9" style="font-family: inherit;" data-slate-leaf="true"&gt;needs exceed the capabilities&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; of the tool you are using, find a solution that enables you. Don't invest in building things from scratch before you know it is worth it to do so.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="8d1e08bc89f440adbb979587bf0c0a51"&gt;For example, if you have an app idea, it is better to have a prototype running as easily as possible. You can test your ideas before investing your time into building custom designed UI or features. In our specific case, you can first prototype your app with the &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;drag and drop UI&lt;/STRONG&gt; of &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Power Apps&lt;/STRONG&gt; and using &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;prebuilt AI models&lt;/STRONG&gt;. When your specific needs surface, such as recognizing a particular object or keyword, you can invest your time into creating your custom models to train for the &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;object&lt;/STRONG&gt; or &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;keyword detection&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Can I use Power Apps and AI Builder for production?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Yes, you can. As with any tool that does things magically, AI Builder in Power Apps comes with a cost. That does not mean you can't &lt;A href="https://docs.microsoft.com/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;try your ideas out for free&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;What will my production app cost?&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to go to production with Power Apps, it is a good idea to consider the costs. Thankfully, there is an app for that: the &lt;A href="https://powerapps.microsoft.com/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;AI Builder Calculator&lt;/A&gt; lets you input which &lt;STRONG&gt;AI tools you will need&lt;/STRONG&gt; and &lt;STRONG&gt;how many users&lt;/STRONG&gt; will be accessing your app's AI features, and gives you the price it will cost you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aiBuilderCalculate.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231033iD3C383D708D37493/image-size/large?v=v2&amp;amp;px=999" role="button" title="aiBuilderCalculate.png" alt="aiBuilderCalculate.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;What are preview features?&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="282b4d7225064fad9f71fc0a55cbf20d"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="b6129f9acc9b497bb8e96cd0b8813cba"&gt;AI Builder was released for &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;public preview&lt;/STRONG&gt; on June 10, 2019 in Europe and the United States. Preview release features are subject to change and may have restricted functionality before the official release for general availability. Preview releases are not meant for production use. You can try them out and influence the final product by giving feedback. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="282b4d7225064fad9f71fc0a55cbf20d"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFJJTIwQnVpbGRlciUyMHdhcyUyMHJlbGVhc2VkJTIwZm9yJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnB1YmxpYyUyMHByZXZpZXclMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMG9uJTIwSnVuZSUyMDEwJTJDJTIwMjAxOSUyMGluJTIwRXVyb3BlJTIwYW5kJTIwdGhlJTIwVW5pdGVkJTIwU3RhdGVzLiUyMFByZXZpZXclMjByZWxlYXNlJTIwZmVhdHVyZXMlMjBhcmUlMjBzdWJqZWN0JTIwdG8lMjBjaGFuZ2UlMjBhbmQlMjBtYXklMjBoYXZlJTIwcmVzdHJpY3RlZCUyMGZ1bmN0aW9uYWxpdHklMjBiZWZvcmUlMjB0aGUlMjBvZmZpY2lhbCUyMHJlbGVhc2UlMjBmb3IlMjBnZW5lcmFsJTIwYXZhaWxhYmlsaXR5LiUyMFByZXZpZXclMjByZWxlYXNlcyUyMGFyZSUyMG5vdCUyMG1lYW50JTIwZm9yJTIwcHJvZHVjdGlvbiUyMHVzZS4lMjBZb3UlMjBjYW4lMjB0cnklMjB0aGVtJTIwb3V0JTIwYW5kJTIwaW5mbHVlbmNlJTIwdGhlJTIwZmluYWwlMjBwcm9kdWN0JTIwYnklMjBnaXZpbmclMjBmZWVkYmFjay4lMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyVGhlJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdlbmVyYWwlMjBBdmFpbGFiaWxpdHklMjAoR0ElMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiklMjByZWxlYXNlJTIwd2lsbCUyMG9jY3VyJTIwaW4lMjBhJTIwcGhhc2VkJTIwbWFubmVyJTJDJTIwd2l0aCUyMHNvbWUlMjBmZWF0dXJlcyUyMHJlbWFpbmluZyUyMGluJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnByZXZpZXclMjBzdGF0dXMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMHdoaWxlJTIwb3RoZXJzJTIwYXJlJTIwcmVsZWFzZWQlMjBmb3IlMjBHQS4lMjBZb3UlMjBjYW4lMjBjaGVjayUyMG91dCUyMHRoZSUyMHJlbGVhc2UlMjBzdGF0dXMlMjBvbiUyMHRoZSUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZkb2NzLm1pY3Jvc29mdC5jb20lMkZhaS1idWlsZGVyJTJGb3ZlcnZpZXclM0ZXVC5tY19pZCUzRGFpbWwtODQzOC1heXlvbmV0JTIzcmVsZWFzZS1zdGF0dXMlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFJJTIwQnVpbGRlciUyMGRvY3VtZW50YXRpb24lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMi4lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="25042651bf5e4732917ab56ded74ec45"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="21eee968ef634d4292f8107c179a91ad"&gt;The &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;General Availability (GA&lt;/STRONG&gt;) release will occur in a phased manner, with some features remaining in &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;preview status&lt;/STRONG&gt; while others are released for GA. You can check out the release status on the &lt;/SPAN&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/ai-builder/overview?WT.mc_id=aiml-8438-ayyonet#release-status" target="_blank" rel="noopener noreferrer" data-key="3691d834fde7487c9fe97b9d9ef22edb"&gt;&lt;SPAN data-key="fc4c858b230c45b69dec2bc9afbee09b"&gt;AI Builder documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-key="608b72be07c347a886bc97a8450c1018"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AIBuilderPreview.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231034iA92C2930BDEC2F40/image-size/large?v=v2&amp;amp;px=999" role="button" title="AIBuilderPreview.png" alt="AIBuilderPreview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is Object Detection?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AI Builder object detection is an AI model that you can train to &lt;STRONG&gt;detect objects in pictures&lt;/STRONG&gt;. AI models usually require that you provide samples of data for training before you are able to perform predictions. Prebuilt models are pre-trained using a set of samples provided by Microsoft, so they are instantly ready to be used for predictions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="testResultSmall.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231042i6EA424CD5D2B5421/image-size/large?v=v2&amp;amp;px=999" role="button" title="testResultSmall.gif" alt="testResultSmall.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Object detection can detect up to &lt;STRONG&gt;500 different objects in a single model&lt;/STRONG&gt; and supports &lt;STRONG&gt;JPG, PNG, and BMP&lt;/STRONG&gt; image formats, or photos taken through the Power Apps control.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to try out Object Detection capabilities?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can try out and see how object detection works, before having to create any accounts or apps yourself, on the &lt;A href="https://azure.microsoft.com/services/cognitive-services/computer-vision/?WT.mc_id=aiml-8438-ayyonet#features" target="_blank" rel="noopener noreferrer"&gt;Azure Computer Vision&lt;/A&gt; page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="seeItinAction.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231036i7355A9B43AC35791/image-size/large?v=v2&amp;amp;px=999" role="button" title="seeItinAction.png" alt="seeItinAction.png" /&gt;&lt;/span&gt;&lt;/P&gt;
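&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;If you prefer to poke at the underlying service from code rather than the demo page, the sketch below shows one way to call the Computer Vision object detection REST endpoint with Python. It assumes a Computer Vision resource in your Azure subscription; the endpoint, key, image URL, and the v3.2 API version used here are assumptions to adjust for your own setup.&lt;/EM&gt;&lt;/P&gt;
&lt;PRE&gt;
# Sketch: calling the Azure Computer Vision object detection operation directly.
# ENDPOINT and KEY are placeholders from your own Computer Vision resource.
import requests

ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
KEY = "YOUR-COMPUTER-VISION-KEY"

url = f"{ENDPOINT}/vision/v3.2/detect"
headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
body = {"url": "https://example.com/photos/shelf.jpg"}  # any publicly reachable image

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Each detected object comes back with a label, a confidence score,
# and a bounding rectangle in pixel coordinates.
for obj in response.json().get("objects", []):
    rect = obj["rectangle"]
    print(obj["object"], round(obj["confidence"], 2), rect["x"], rect["y"], rect["w"], rect["h"])
&lt;/PRE&gt;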
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;What can you do with Object Detection?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Object counting and inventory management&lt;/LI&gt;
&lt;LI&gt;Brand logo recognition&lt;/LI&gt;
&lt;LI&gt;Wildlife animal recognition&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;How to detect objects from images?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;To start creating your AI model for your app, sign in to &lt;A href="https://powerapps.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Power Apps&lt;/A&gt; and click on AI Builder in the left-hand menu. Select Object Detection from the "Refine Model for your business needs" option.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="buildAI.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231037i819A81900F35C690/image-size/large?v=v2&amp;amp;px=999" role="button" title="buildAI.png" alt="buildAI.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Give your new AI model a unique name. Select Common Objects and proceed to the next section.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="commonObj.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231038i7060E000CF9AE3BE/image-size/large?v=v2&amp;amp;px=999" role="button" title="commonObj.png" alt="commonObj.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Name the objects that you are going to detect.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="namedObjects.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231039iA728BDAEF57A2F11/image-size/large?v=v2&amp;amp;px=999" role="button" title="namedObjects.png" alt="namedObjects.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Upload images that contain the object you will detect. To start with, you can upload &lt;STRONG&gt;15 images for each object&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="imageDetectionFormat.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231040iD670F3B5FFCDAA51/image-size/medium?v=v2&amp;amp;px=400" role="button" title="imageDetectionFormat.png" alt="imageDetectionFormat.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Make sure each object has approximately the same number of images tagged. If you have many more examples of one object than of the others, the trained model will be more likely to detect that object even when it is not present. (A quick way to check the balance is sketched after this list.)&lt;/LI&gt;
&lt;LI&gt;Tag your objects by selecting the area each object is in and choosing the name of the object.&lt;/LI&gt;
&lt;/UL&gt;
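&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Here is a small Python sketch for checking that balance before you upload. It assumes you keep one local folder of example photos per object (a convention invented for this sketch, not something AI Builder requires) and simply counts the images in each folder.&lt;/EM&gt;&lt;/P&gt;
&lt;PRE&gt;
# Quick balance check on local training images before uploading to AI Builder.
# Assumed layout (purely for this sketch): training_images/OBJECT_NAME/*.jpg
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".bmp"}  # formats object detection accepts
root = Path("training_images")

counts = {
    folder.name: sum(1 for f in folder.iterdir() if f.suffix.lower() in IMAGE_SUFFIXES)
    for folder in root.iterdir()
    if folder.is_dir()
}

for name in sorted(counts):
    print(f"{name}: {counts[name]} images")

# Flag a skewed dataset: very uneven counts bias the model toward the common object.
print(f"smallest class: {min(counts.values())}, largest class: {max(counts.values())}")
print("Aim for roughly equal counts, with at least 15 images per object.")
&lt;/PRE&gt;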
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="tagging.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231043i3FB7A1BFE1C6E937/image-size/large?v=v2&amp;amp;px=999" role="button" title="tagging.png" alt="tagging.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once you are done, choose Done Tagging and then Train. The training process will take some time.&lt;/LI&gt;
&lt;LI&gt;If you decide not to use an image or want to clear any tags, you can do that at any time by going back to &lt;STRONG&gt;AI Builder&lt;/STRONG&gt; in the left-hand menu, selecting your &lt;STRONG&gt;model&lt;/STRONG&gt;, and choosing Edit.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dontUseImage.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231044iE5EF548261A710DB/image-size/large?v=v2&amp;amp;px=999" role="button" title="dontUseImage.png" alt="dontUseImage.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;AI Builder gives you a performance score out of 100 and a way to quickly test your model before publishing. You can edit your model and retrain it to improve that score. The next section covers some best practices for improving performance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="performance.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231045i2E73E8C4D5C4E3E9/image-size/large?v=v2&amp;amp;px=999" role="button" title="performance.png" alt="performance.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How to Improve Your Custom Model Performance?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="f561f804df1241b192b66665ca8c0ceb"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="f9aa28c6644f4a46a5f388e4ba7621e1"&gt;Getting the best model performance for your business can be an iterative process. Results can vary depending on the customizations you make to the model, and the training data you provide.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdldHRpbmclMjB0aGUlMjBiZXN0JTIwbW9kZWwlMjBwZXJmb3JtYW5jZSUyMGZvciUyMHlvdXIlMjBidXNpbmVzcyUyMGNhbiUyMGJlJTIwYSUyMHJhdGhlciUyMGl0ZXJhdGl2ZSUyMHByb2Nlc3MuJTIwUmVzdWx0cyUyMGNhbiUyMHZhcnklMjBkZXBlbmRpbmclMjBvbiUyMHRoZSUyMGN1c3RvbWl6YXRpb25zJTIweW91JTIwbWFrZSUyMHRvJTIwdGhlJTIwbW9kZWwlMkMlMjBhbmQlMjB0aGUlMjB0cmFpbmluZyUyMGRhdGElMjB5b3UlMjBwcm92aWRlLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJUbyUyMGhlbHAlMjBmYWNpbGl0YXRlJTIwdGhpcyUyMHByb2Nlc3MlMkMlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyQUklMjBCdWlsZGVyJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBhbGxvd3MlMjB5b3UlMjB0byUyMGhhdmUlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIybXVsdGlwbGUlMjB2ZXJzaW9ucyUyMG9mJTIweW91ciUyMG1vZGVsJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBzbyUyMHlvdSUyMGNhbiUyMHVzZSUyMHlvdXIlMjBtb2RlbCUyMGFuZCUyMGNvbnRpbnVlJTIwdG8lMjBpbXByb3ZlJTIwaXQlMjBhdCUyMHRoZSUyMHNhbWUlMjB0aW1lLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;To help facilitate this process, &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;AI Builder&lt;/STRONG&gt; allows you to have &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;multiple versions of your model&lt;/STRONG&gt; so you can use your model and continue to improve it at the same time.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;What are some best practices for training for object detection?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Use diverse images &lt;/STRONG&gt;to train with all possible use cases. For example if you are training your data to detect a VR headset, use images of the headset used in different environments as well as the out of the box images. If you only train with images with people wearing the headset, your model would not recognize images of the same device when it is in its box.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PXL_20201007_121129483.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231047i4D7AEF341316FAAF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PXL_20201007_121129483.jpg" alt="PXL_20201007_121129483.jpg" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="-1x-1.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231048iF96BD3B8915D96AC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="-1x-1.jpg" alt="-1x-1.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;Use images with a variety of &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;backgrounds.&lt;/STRONG&gt; Photos in context are better than photos in front of neutral backgrounds.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PXL_20201007_121045280.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231049iA5D13D928462B5BC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PXL_20201007_121045280.jpg" alt="PXL_20201007_121045280.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;Use training images that have different &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;lighting&lt;/STRONG&gt;. For example, include images taken with flash, high exposure, and so on.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231051iC52BAA5292ABDC25/image-size/medium?v=v2&amp;amp;px=400" role="button" title="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" alt="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class=""&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="4861a2c304614e2e87151e42a5d72ffe"&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="70b94925cab840db867c5525a6655c6b"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="00c3fe4ff62f4a1ab07f9e8074c80fb5"&gt;Use images of objects in varied sizes. Different sizing helps the model generalize better.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI class="" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMlVzZSUyMGltYWdlcyUyMG9mJTIwb2JqZWN0cyUyMGluJTIwdmFyaWVkJTIwc2l6ZXMuJTIwRGlmZmVyZW50JTIwc2l6aW5nJTIwaGVscHMlMjB0aGUlMjBtb2RlbCUyMGdlbmVyYWxpemUlMjBiZXR0ZXIuJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LWl0ZW0lMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyVXNlJTIwaW1hZ2VzJTIwdGFrZW4lMjBmcm9tJTIwZGlmZmVyZW50JTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMmFuZ2xlcyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyLiUyMElmJTIwYWxsJTIweW91ciUyMHBob3RvcyUyMGFyZSUyMGZyb20lMjBhJTIwc2V0JTIwb2YlMjBmaXhlZCUyMGNhbWVyYXMlMjBzdWNoJTIwYXMlMjBzdXJ2ZWlsbGFuY2UlMjBjYW1lcmFzJTJDJTIwYXNzaWduJTIwYSUyMGRpZmZlcmVudCUyMGxhYmVsJTIwdG8lMjBlYWNoJTIwY2FtZXJhLiUyMFRoaXMlMjBjYW4lMjBoZWxwJTIwYXZvaWQlMjBtb2RlbGluZyUyMHVucmVsYXRlZCUyMG9iamVjdHMlMjBzdWNoJTIwYXMlMjBsYW1wcG9zdHMlMjBhcyUyMHRoZSUyMGtleSUyMGZlYXR1cmUuJTIwQXNzaWduJTIwY2FtZXJhJTIwbGFiZWxzJTIwZXZlbiUyMGlmJTIwdGhlJTIwY2FtZXJhcyUyMGNhcHR1cmUlMjB0aGUlMjBzYW1lJTIwb2JqZWN0cy4lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="be0afeed7b3b41f3a42bc5252b58b860"&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="02e0029c11b84aa181c9411b658e4785"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="622c24691f184b96ba7272b522f0e3ab"&gt;Use images taken from different &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;angles&lt;/STRONG&gt;. If all your photos are from a set of fixed cameras such as surveillance cameras, assign a different label to each camera. This can help avoid modeling unrelated objects such as lampposts as the key feature. Assign camera labels even if the cameras capture the same objects.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="02e0029c11b84aa181c9411b658e4785"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="02e0029c11b84aa181c9411b658e4785"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="02e0029c11b84aa181c9411b658e4785"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" style="width: 300px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231052i3F1E5736CAB35567/image-size/medium?v=v2&amp;amp;px=400" role="button" title="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" alt="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;SPAN&gt;How to share your models?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="619d4823bb21470db9d121082e6bd572"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2b4cedbec4df45a08c9e9ef5eddc9ef6"&gt;By default, only you can see the models you create and publish. This feature allows you to test them and use them within apps and flows without exposing them.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="619d4823bb21470db9d121082e6bd572"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkJ5JTIwZGVmYXVsdCUyQyUyMG9ubHklMjB5b3UlMjBjYW4lMjBzZWUlMjB0aGUlMjBtb2RlbHMlMjB5b3UlMjBjcmVhdGUlMjBhbmQlMjBwdWJsaXNoLiUyMFRoaXMlMjBmZWF0dXJlJTIwYWxsb3dzJTIweW91JTIwdG8lMjB0ZXN0JTIwdGhlbSUyMGFuZCUyMHVzZSUyMHRoZW0lMjB3aXRoaW4lMjBhcHBzJTIwYW5kJTIwZmxvd3MlMjB3aXRob3V0JTIwZXhwb3NpbmclMjB0aGVtLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJJZiUyMHlvdSUyMHdhbnQlMjBvdGhlcnMlMjB0byUyMHVzZSUyMHlvdXIlMjBtb2RlbCUyQyUyMHlvdSUyMGNhbiUyMHNoYXJlJTIwaXQlMjB3aXRoJTIwc3BlY2lmaWMlMjB1c2VycyUyQyUyMGdyb3VwcyUyQyUyMG9yJTIweW91ciUyMHdob2xlJTIwb3JnYW5pemF0aW9uLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="90a12d0f0e6a4f378391d07edb57dfaf"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="c739c2ab03294129861e2687f7e9639f"&gt;If you want others to use your model, you can share it with specific users, groups, or your whole organization.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="90a12d0f0e6a4f378391d07edb57dfaf"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H3&gt;How to use your Custom Vision model in a Power App?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Once you are happy with your model's performance, you can add it to a new app by choosing &lt;STRONG&gt;Use model&lt;/STRONG&gt; and &lt;STRONG&gt;New app&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="createPApp.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231053iF0E300B00A9520DE/image-size/large?v=v2&amp;amp;px=999" role="button" title="createPApp.png" alt="createPApp.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You will be redirected to the Power Apps editor, and an object detection component that uses your model will be added automatically. In the editor, you can add new pages, set up navigation, and design and customize your pages.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerAppEditor.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231054iA6A607B2EC5B9924/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerAppEditor.png" alt="powerAppEditor.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Once you are happy with the design, you can publish and share your app. You can use your new app by downloading Power Apps from the &lt;A href="https://apps.apple.com/us/app/power-apps/id1047318566" target="_blank" rel="noopener"&gt;Apple&lt;/A&gt;, &lt;A href="https://play.google.com/store/apps/details?id=com.microsoft.msapps&amp;amp;hl=en_US&amp;amp;gl=US" target="_blank" rel="noopener"&gt;Android&lt;/A&gt; or &lt;A href="https://www.microsoft.com/en-us/p/power-apps/9nblggh5z8f3?ocid=9nblggh5z8f3_ORSEARCH_Bing&amp;amp;rtc=1#activetab=pivot:overviewtab" target="_blank" rel="noopener"&gt;Microsoft&lt;/A&gt; stores. Once you sign in, your app will be listed in the Power Apps mobile app.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerAppsPlayStore.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231058iE1472CF4D467818A/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerAppsPlayStore.png" alt="powerAppsPlayStore.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;What's next?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Now that you have a prototype of your app, you can add more features, gather feedback, and test it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Should I keep using my Power App or rebuild it?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="423db96917dd45e9a50460b43519e456"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="e45e3f2f197f4dc7b47b2aadc4bc6b87"&gt;When your needs change, you can consider refactoring your application to a serverless backend and a custom built UI. If the app is working fine for you and your users, you can continue using and improving overtime using Power Apps. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="42bacddf1fa04516afd7deca73c1d59d"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="c7685faf4b174896a84696db7cc274b5"&gt;What would be the changes that require the upgrade? There are two possibilities for the changed requirements for your app:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="42bacddf1fa04516afd7deca73c1d59d"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMldoZW4lMjB5b3VyJTIwbmVlZHMlMjBjaGFuZ2UlMkMlMjB5b3UlMjBjYW4lMjBjb25zaWRlciUyMHJlZmFjdG9yaW5nJTIweW91ciUyMGFwcGxpY2F0aW9uJTIwdG8lMjBhJTIwc2VydmVybGVzcyUyMGJhY2tlbmQlMjBhbmQlMjBhJTIwY3VzdG9tJTIwYnVpbHQlMjBVSS4lMjBJZiUyMHRoZSUyMGFwcCUyMGlzJTIwd29ya2luZyUyMGZpbmUlMjBmb3IlMjB5b3UlMjBhbmQlMjB5b3VyJTIwdXNlcnMlMkMlMjB5b3UlMjBjYW4lMjBjb250aW51ZSUyMHVzaW5nJTIwYW5kJTIwaW1wcm92aW5nJTIwb3ZlcnRpbWUlMjB1c2luZyUyMFBvd2VyJTIwQXBwcy4lMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyV2hhdCUyMHdvdWxkJTIwYmUlMjB0aGUlMjBjaGFuZ2VzJTIwdGhhdCUyMHJlcXVpcmVzJTIwdGhlJTIwdXBncmFkZSUzRiUyMFRoZXJlJTIwYXJlJTIwdHdvJTIwcG9zc2liaWxpdGllcyUyMGZvciUyMHRoZSUyMGNoYW5nZWQlMjByZXF1aXJlbWVudHMlMjBmb3IlMjB5b3VyJTIwYXBwJTNBJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkZlYXR1cmUlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtaXRlbSUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJCdWRnZXQlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="304bf6290945455ea4640fe42ac13db9"&gt;
&lt;UL&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="48b71c16a4a446c58f46f443686efc84"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="f2209c25d08b4d69998637fe161cca0a"&gt;Feature&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="48b71c16a4a446c58f46f443686efc84"&gt;Budget&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to create a custom feature for Power Apps?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Ready-made tools are always limited to the features the product team decides to include. If you are writing custom code, you can add any feature you need. For the features that are not available yet, you can author a custom connector that you can use with or without Power Apps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A &lt;STRONG&gt;connector&lt;/STRONG&gt; is a &lt;STRONG&gt;proxy&lt;/STRONG&gt; or a &lt;STRONG&gt;wrapper around an API&lt;/STRONG&gt; that allows the underlying service to talk to Microsoft &lt;STRONG&gt;Power Automate&lt;/STRONG&gt;, &lt;STRONG&gt;Microsoft Power Apps,&lt;/STRONG&gt; and &lt;STRONG&gt;Azure Logic Apps&lt;/STRONG&gt;. It provides a way for users to connect their accounts and leverage a set of pre-built actions and triggers to build their apps and workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Check out the list of &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Power Apps Connectors&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/connectors/custom-connectors/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;how to build a custom connector&lt;/A&gt; yourself.&lt;/P&gt;
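&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To make the idea concrete, here is a minimal sketch of the kind of OpenAPI (Swagger 2.0) definition a custom connector is built from. The host, path, and operation names below are hypothetical placeholders for your own API, not an official sample.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "swagger": "2.0",
    "info": {
        "title": "Contoso Object Detection API",
        "description": "Hypothetical API, used only to illustrate the shape of a custom connector definition",
        "version": "1.0"
    },
    "host": "contoso-vision.example.com",
    "basePath": "/api",
    "schemes": [ "https" ],
    "paths": {
        "/detect": {
            "post": {
                "operationId": "DetectObjects",
                "summary": "Returns detected objects for an image URL",
                "consumes": [ "application/json" ],
                "produces": [ "application/json" ],
                "parameters": [
                    {
                        "name": "body",
                        "in": "body",
                        "required": true,
                        "schema": {
                            "type": "object",
                            "properties": {
                                "imageUrl": { "type": "string" }
                            }
                        }
                    }
                ],
                "responses": {
                    "200": { "description": "Detection results" }
                }
            }
        }
    }
}&lt;/LI-CODE&gt;
&lt;P&gt;Importing a definition like this is what gives Power Apps and Power Automate an action (here, DetectObjects) they can call against your API.&lt;/P&gt;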
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to compare costs for Power Apps and Logic Apps?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you start using your app, you will have a better idea of how many users access the AI capabilities and how many images you need for training. You can use the &lt;A href="https://powerapps.microsoft.com/en-us/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;AI Builder Cost Calculator&lt;/A&gt; and the &lt;A href="https://azure.microsoft.com/pricing/details/logic-apps/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Logic App Cost Calculator&lt;/A&gt; to compare options. You can check the price of any other service through the &lt;A href="https://azure.microsoft.com/pricing/calculator/?service=logic-apps&amp;amp;WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Azure Product Cost Calculator&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;Additional Resources&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;Artificial Intelligence for Developers&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;&lt;A title="Cognitive Services Overview" href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Cognitive Services Overview&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="940d8a03438c4049bbdd740cfc6335bd" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmlubGluZSUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaW5rJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTIyaHJlZiUyMiUzQSUyMmh0dHBzJTNBJTJGJTJGZG9jcy5taWNyb3NvZnQuY29tJTJGcG93ZXJhcHBzJTJGbWFrZXIlMkZzaWdudXAtZm9yLXBvd2VyYXBwcyUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMlBvd2VyJTIwQXBwcyUyMEZyZWUlMjBUcmlhbCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0Q="&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;Power Apps Free Trial&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="reset-3c756112--listItemContent-756c9114" data-key="3d4576f9d943425f82fcbff6d504d775"&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="d69ae82c8f9f4b0bbec6bb4785ab1c81"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/power-platform/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="e2d8bac36dcf4340bf5afdb6e5918a95"&gt;&lt;SPAN data-key="3db9d9eaafd7472983375cc5e7f4a680"&gt;Power Platform Documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="fbef39a01f05416db4b9c1b50a66d2a1"&gt;&lt;SPAN data-key="a1661c95f8a141cfb7468f7409daf18c"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/en-us/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="27da6edf246a4d4b9759b554bce97421" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZkb2NzLm1pY3Jvc29mdC5jb20lMkZlbi11cyUyRmNvbm5lY3RvcnMlMkZjb25uZWN0b3ItcmVmZXJlbmNlJTJGY29ubmVjdG9yLXJlZmVyZW5jZS1wb3dlcmFwcHMtY29ubmVjdG9ycyUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkxpc3QlMjBvZiUyMFBvd2VyJTIwQXBwcyUyMENvbm5lY3RvcnMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;List of Power Apps Connectors&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" style="background-color: #ffffff;" href="https://docs.microsoft.com/ai-builder/overview?WT.mc_id=aiml-8438-ayyonet#release-status" target="_blank" rel="noopener noreferrer" data-key="036857cd5d28471398af1b19c7338c24"&gt;AI Builder Release Status&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" style="font-family: inherit; background-color: #ffffff;" href="https://powerapps.microsoft.com/en-us/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="bc7e432b04814d829ee3f48d4366c534"&gt;&lt;SPAN data-key="6ece649bd56c4515a8ee954050659dc8"&gt;AI Builder Cost Calculator&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-key="6ece649bd56c4515a8ee954050659dc8"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/?WT.mc_id=aiml-8438-ayyonet#features" target="_blank" rel="noopener noreferrer" data-key="f2f54aa52cb34698a5d112d487f23134" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZhenVyZS5taWNyb3NvZnQuY29tJTJGZW4tdXMlMkZzZXJ2aWNlcyUyRmNvZ25pdGl2ZS1zZXJ2aWNlcyUyRmNvbXB1dGVyLXZpc2lvbiUyRiUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjNmZWF0dXJlcyUyMiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyQ29tcHV0ZXIlMjBWaXNpb24lMjBPdmVydmlldyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0Q="&gt;Computer Vision Overview&lt;/A&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;Leave a comment below for your AI application use cases and the tutorials you would like to see.&lt;/EM&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Mon, 15 Mar 2021 21:14:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-prototyping-a-no-code-solution-with-power-apps/ba-p/2189550</guid>
      <dc:creator>Yonet</dc:creator>
      <dc:date>2021-03-15T21:14:46Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2195501#M189</link>
      <description>&lt;P&gt;&lt;LI-USER uid="590027"&gt;&lt;/LI-USER&gt;&amp;nbsp;would it be possible to add the ability to toggle displaying of the short answer, possibly via an Application Setting in the bot's App Service? Showing both short and long answers would be extremely confusing for end users.&lt;/P&gt;</description>
      <pubDate>Tue, 09 Mar 2021 09:47:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2195501#M189</guid>
      <dc:creator>julianportelli</dc:creator>
      <dc:date>2021-03-09T09:47:32Z</dc:date>
    </item>
    <item>
      <title>Mask detection now available in preview via Azure Cognitive Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/mask-detection-now-available-in-preview-via-azure-cognitive/ba-p/2194157</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The spread of COVID-19 has changed our day-to-day lives in unprecedented ways. Organizations around the world are taking action to contain and help prevent further spread of the disease, using AI technologies like computer vision to help ensure the safety of their employees and customers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Cognitive Services now provides mask detection functionality to assist application developers in building solutions that can help monitor and contain the spread. Mask detection can be deployed anywhere, from the cloud with the Face service to the edge with the Spatial analysis service.&lt;/P&gt;
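&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the cloud, the Face service surfaces mask information as a face attribute returned by face detection. The fragment below is only an illustrative sketch of what such a response can look like; check the current Face API reference for the exact attribute names and supported values.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;[
    {
        "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
        "faceRectangle": { "top": 78, "left": 394, "width": 113, "height": 113 },
        "faceAttributes": {
            "mask": {
                "type": "faceMask",
                "noseAndMouthCovered": true
            }
        }
    }
]&lt;/LI-CODE&gt;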
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Mask detection on the edge&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge" target="_blank" rel="noopener"&gt;Spatial analysis&lt;/A&gt; is, a capability of &lt;FONT size="3"&gt;Computer&lt;/FONT&gt; Vision, part of Azure Cognitive Services. This capability understands people’s movements in a physical space by analyzing real-time video, significantly increasing efficiency, and providing valuable insights for enabling various scenarios including,&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Counting people in a space for maximum occupancy&lt;/LI&gt;
&lt;LI&gt;Understanding the distance between people for social distancing measures&lt;/LI&gt;
&lt;LI&gt;Determining customer footfall such as in retail spaces&lt;/LI&gt;
&lt;LI&gt;Determining wait time in a checkout line&lt;/LI&gt;
&lt;LI&gt;Determining trespassing in protected areas&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Spatial analysis can now detect whether or not a person is wearing a protective face covering. With this new capability, businesses can leverage insights to build applications that measure safety and enhance compliance. For example, a business can aggregate data on the percentage of people wearing masks in a physical space to improve compliance measures. To help ensure the safety of people working in a given space, &lt;SPAN&gt;mask detection can also be used to send a notification when a person accidentally enters the space without a face mask.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Mask detection can be enabled for the following spatial analysis operations: &lt;EM&gt;personcount&lt;/EM&gt;, &lt;EM&gt;personcrossingline&lt;/EM&gt; and &lt;EM&gt;personcrossingpolygon&lt;/EM&gt;. The classifier model is enabled by setting the ‘ENABLE_FACE_MASK_CLASSIFIER’ parameter to True; it is disabled by default. The attributes &lt;EM&gt;face_mask&lt;/EM&gt; or &lt;EM&gt;face_noMask&lt;/EM&gt; are returned as metadata, with a confidence score, for each person detected in the video stream.&lt;/SPAN&gt;&lt;/P&gt;
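&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch only, enabling the classifier amounts to setting that parameter on the operation you deploy, alongside the usual camera settings. The JSON below is illustrative and not the exact deployment-manifest schema; the camera URL and source ID are placeholders, and the authoritative parameter layout is in the spatial analysis container documentation.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "operationId": "cognitiveservices.vision.spatialanalysis-personcount",
    "parameters": {
        "VIDEO_URL": "rtsp://camera.contoso.local/live",
        "VIDEO_SOURCE_ID": "doorcamera",
        "ENABLE_FACE_MASK_CLASSIFIER": true
    }
}&lt;/LI-CODE&gt;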
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Spatial_analysis.jpg" style="width: 567px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261767iFB9D37F51D3F21F1/image-dimensions/567x355?v=v2" width="567" height="355" role="button" title="Spatial_analysis.jpg" alt="Face mask and Person detection with Spatial analysis" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Face mask and Person detection with Spatial analysis&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Spatial analysis operations provide a real-time video analysis pipeline on new and existing RTSP cameras. The deployment of the spatial analysis container on edge devices is facilitated by Azure IoT Hub. When video is streamed and processed by spatial analysis, the container emits AI insight events about people’s movement which in turn are sent to Azure IoT Hub as IoT telemetry. From IoT Hub you can create various routes to other Azure services and build your business solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="spatial_analysis_container.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261768i608FB98277291AFC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="spatial_analysis_container.png" alt="Spatial analysis container deployment with Azure IoT" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Spatial analysis container deployment with Azure IoT&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;The events from each operation are egressed to Azure IoT Hub in JSON format. Below is a sample JSON payload for an event output by the&lt;EM&gt; cognitiveservices.vision.spatialanalysis-personcrossingline&lt;/EM&gt; operation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "events": [
        {
            "id": "3733eb36935e4d73800a9cf36185d5a2",
            "type": "personLineEvent",
            "detectionIds": [
                "90d55bfc64c54bfd98226697ad8445ca"
            ],
            "properties": {
                "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
                "status": "CrossLeft"
            },
            "zone": "doorcamera"
        }
    ],
    "sourceInfo": {
        "id": "camera_id",
        "timestamp": "2020-08-24T06:06:53.261Z",
        "width": 608,
        "height": 342,
        "frameId": "1340",
        "imagePath": ""
    },
    "detections": [
        {
            "type": "person",
            "id": "90d55bfc64c54bfd98226697ad8445ca",
            "region": {
                "type": "RECTANGLE",
                "points": [
                    {
                        "x": 0.491627341822574,
                        "y": 0.2385801348769874
                    },
                    {
                        "x": 0.588894994635331,
                        "y": 0.6395559924387793
                    }
                ]
            },
            "confidence": 0.9005028605461121,
            "metadata": {
	        "attributes": {
	            "face_Mask": 0.99
	        }
	    }
        }
    ],
    "schemaVersion": "1.0"
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
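&lt;P&gt;If you want to consume these events programmatically, the following is a minimal sketch (not an official sample) that reads them from the IoT Hub built-in Event Hubs-compatible endpoint using the azure-eventhub Python package. The connection string, event hub name, and the exact attribute keys are assumptions based on the sample payload above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: read spatial analysis events from the IoT Hub built-in
# Event Hubs-compatible endpoint. Placeholders below must be replaced with your values.
from azure.eventhub import EventHubConsumerClient

IOTHUB_EVENTHUB_CONN_STR = "YOUR-IOTHUB-EVENT-HUBS-COMPATIBLE-CONNECTION-STRING"
EVENTHUB_NAME = "YOUR-EVENT-HUBS-COMPATIBLE-NAME"

def on_event(partition_context, event):
    insight = event.body_as_json()
    # Each spatial analysis message carries an "events" list and a "detections" list.
    for detection in insight.get("detections", []):
        attributes = detection.get("metadata", {}).get("attributes", {})
        # Attribute keys assumed from the sample payload above (face_Mask / face_noMask).
        if "face_Mask" in attributes:
            print("Person wearing a mask, confidence:", attributes["face_Mask"])
        elif "face_noMask" in attributes:
            print("Person without a mask, confidence:", attributes["face_noMask"])

client = EventHubConsumerClient.from_connection_string(
    IOTHUB_EVENTHUB_CONN_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME
)
with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;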
&lt;P&gt;To learn how to build business applications with spatial analysis, follow these &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_blank" rel="noopener"&gt;instructions&lt;/A&gt; to deploy a sample Azure web application that presents a live view of people-counting events in a physical space. You can modify this app to use other spatial analysis operations and adapt it to the event output of the container.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Mask detection in the cloud&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;Mask detection is also available through the face detection cloud endpoint in the Azure Cognitive Services&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/face/" target="_blank" rel="noopener"&gt;Face API&lt;/A&gt;. This capability analyzes images and detects one or more human faces, along with attributes for each face in the image. The face mask attribute is available with the latest detection_03 model, along with the additional attribute &lt;EM&gt;“noseAndMouthCovered”&lt;/EM&gt;, which indicates whether the mask covers both the nose and mouth.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To leverage the latest mask detection capability, specify the detection model in the API request by setting the detectionModel&amp;nbsp;parameter to detection_03. Refer to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model" target="_blank" rel="noopener"&gt;How to specify a detection model&lt;/A&gt; to learn more about the capabilities of each detection model and for sample code to call it.&lt;/P&gt;
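&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an illustration, here is a minimal sketch of such a request using Python and the requests package; the endpoint, key, and image URL are placeholders, and the returnFaceAttributes value of "mask" reflects the attribute described above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: call the Face detect endpoint with detection_03 to get the mask attribute.
import requests

ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
KEY = "YOUR-FACE-API-KEY"

response = requests.post(
    ENDPOINT + "/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",   # required for the mask attribute
        "returnFaceAttributes": "mask",
        "returnFaceId": "false",
    },
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"url": "https://example.com/people-wearing-masks.jpg"},  # placeholder image URL
)
response.raise_for_status()

for face in response.json():
    mask = face["faceAttributes"]["mask"]
    print(face["faceRectangle"], mask["type"], "nose and mouth covered:", mask["noseAndMouthCovered"])&lt;/LI-CODE&gt;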
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="facemask.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261770i6009C7B79E61C4C0/image-size/large?v=v2&amp;amp;px=999" role="button" title="facemask.png" alt="Face mask detection with Face Service" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Face mask detection with Face Service&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Detection_03 API response with face mask attribute:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;  {
    "faceId": "eee58bd3-0b54-4f48-9a96-c9c60724ee80",
    "faceRectangle": {
      "top": 171,
      "left": 1212,
      "width": 79,
      "height": 125
    },
    "faceAttributes": {
      "mask": {
        "type": "faceMask",
        "noseAndMouthCovered": “true”
      }
  },
  {                         
   "faceId": "2d83c3c1-7266-4b84-b47b-a65645368021",
    "faceRectangle": {
      "top": 364,
      "left": 600,
      "width": 66,
      "height": 80
    },
    "faceAttributes": {
      "mask": {
        "type": "faceMask",
        "noseAndMouthCovered": “true”
     }
  },
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Responsible AI and Deployment Guide&lt;/H3&gt;
&lt;P&gt;Microsoft’s principled approach enables developers to build rich solutions while ensuring responsible use.&lt;/P&gt;
&lt;P&gt;Responsible &lt;A href="https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/" target="_blank" rel="noopener"&gt;deployment recommendations&lt;/A&gt; for spatial analysis are provided in accordance with Microsoft &lt;A href="https://www.microsoft.com/ai/responsible-ai" target="_blank" rel="noopener"&gt;Responsible AI Principles&lt;/A&gt;: fairness, reliability &amp;amp; safety, privacy &amp;amp; security, inclusiveness, transparency, and human accountability. For general guidelines and specific recommendations for height, angle, and camera-to-focal-point distance, see the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-camera-placement" target="_blank" rel="noopener"&gt;Camera placement guide&lt;/A&gt;. Also refer to the Face API &lt;A href="https://azure.microsoft.com/en-us/resources/transparency-note-azure-cognitive-services-face-api/" target="_blank" rel="noopener"&gt;Transparency Note&lt;/A&gt; for clear guidance on the use of facial recognition, to help ensure it fits your goals and achieves accurate results.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get Started&lt;/H3&gt;
&lt;P&gt;Learn more with our documentation: &lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-container" target="_blank" rel="noopener"&gt;Spatial analysis&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/quickstarts/client-libraries?tabs=visual-studio&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;QuickStart: Face Service&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Follow the tutorials to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_blank" rel="noopener"&gt;Create a People Counting Web App&lt;/A&gt;&lt;SPAN&gt; and &lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/tutorials/faceapiincsharptutorial" target="_blank" rel="noopener"&gt;Detect faces using the .NET SDK&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Learn about &lt;A href="https://techcommunity.microsoft.com/Azure%20Stack%20Edge" target="_blank" rel="noopener"&gt;Azure Stack Edge&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/services/iot-hub" target="_blank" rel="noopener"&gt;Azure IoT Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 08 Mar 2021 21:11:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/mask-detection-now-available-in-preview-via-azure-cognitive/ba-p/2194157</guid>
      <dc:creator>vaparth</dc:creator>
      <dc:date>2021-03-08T21:11:35Z</dc:date>
    </item>
    <item>
      <title>Put AI into practice with Microsoft's Azure AI Hackathon</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/put-ai-into-practice-with-microsoft-s-azure-ai-hackathon/ba-p/2193807</link>
      <description>&lt;P&gt;If you’ve been looking for a reason to get started with AI to solve a particular problem or use case, look no further! We invite you to put your skills to the test and apply Azure AI to a new or existing project. As you may have seen in an earlier &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678" target="_blank" rel="noopener"&gt;post by Anand Raman&lt;/A&gt;, we have been hosting an &lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Azure AI hackathon&lt;/A&gt; in which you can submit your project and be eligible to win prizes. Developers of all backgrounds and skill levels are welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models with Cognitive Services or building your own custom ML models with Azure Machine Learning.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azureai.devpost.com/" target="_self"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_0-1615224072279.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261685i967787DF729913DB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_0-1615224072279.png" alt="AI Hackathon homepage" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;AI Hackathon homepage&lt;/span&gt;&lt;/span&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you’re interested in participating, visit the &lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Azure AI Hackathon page&lt;/A&gt; to get started. The deadline is April 5&lt;SUP&gt;th&lt;/SUP&gt; so you still have time to build and submit a project! Use one or more of the following Azure AI services to build a new project or update an existing project:&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank"&gt;Azure Machine Learning&lt;/A&gt;,&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/" target="_blank"&gt;Azure Cognitive Services&lt;/A&gt;, &lt;A href="https://github.com/microsoft/botframework-sdk" target="_self"&gt;Bot Framework&lt;/A&gt; and&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_blank"&gt;Azure Cognitive Search&lt;/A&gt;.&amp;nbsp;Projects may integrate with other Azure services, open source technologies (including but not limited to frameworks, libraries, and APIs) and physical hardware of your choice.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you’re looking for a little inspiration, below are a few examples of past winners:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 First Place– Trashé&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_1-1615224072288.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261684iA67BCD961646B0CE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_1-1615224072288.png" alt="Trashe Smarter Recycling solution" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Trashe Smarter Recycling solution&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Submitted by Nathan Glover and Stephen Mott, Trashé is a SmartBin that aims to help people make more informed recycling decisions. While the idea is super impactful, it’s even more powerful when you see it in action- not just the intelligence, but the end-to-end scenario of how it can be applied in a real-world environment.&lt;/P&gt;
&lt;P&gt;This team used many Azure services to connect the hardware, intelligence, and presentation layers—you can see this is a well-researched architecture that is reusable in multiple scenarios.&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/classify-images-with-custom-vision-service/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Custom Vision&lt;/A&gt;&amp;nbsp;was a great choice in this case, enabling the team to create a well-performing model with very little training data. The more we recycle, the better the model will get. It was great to see the setup instructions included to help build unique versions of Trashé so users can contribute to helping the environment by recycling correctly within their local communities—this community approach is incredibly scalable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 Second Place- AfriFarm&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_2-1615224072292.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261683i8BEE1DF9CE5A2F28/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_2-1615224072292.png" alt="wmendoza_2-1615224072292.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Niza Siwale’s app recognizes crop diseases from images using&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/intro-to-azure-machine-learning-service/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&amp;nbsp;and publishes the findings so anyone can track disease breakouts. This also provides a real-time update for government agencies to act quickly and provide support to affected communities. As quoted by Niza, this project has an incredible reach to a possible 200 million farmers whose livelihoods depend on farming in Africa.&lt;/P&gt;
&lt;P&gt;By creating a simple Android application where users can take photos of crops to analyze, each farmer gets information when they need it; users can also contribute their own findings back to the community around them, keeping everyone more informed and connected. Using the popular Keras framework along with the Azure Machine Learning service, this project built and deployed a strong plant disease recognition model that can be called from the application. Any future work or improved versions of the model can be monitored and deployed in a development cycle, so the progression of the model can be tracked over time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 Third Place- Water Level Anomaly detector&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_3-1615224072301.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261686i03A8EDC1BC4C9CAE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_3-1615224072301.png" alt="wmendoza_3-1615224072301.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Roy Kincaid’s project identifies drastic changes in water levels using an ultrasonic sensor, which could be useful for detecting potential floods and natural disasters. This information can then be used to provide adequate warning for people to best prepare for major changes in their environment. The Water Level Anomaly Detector could also be beneficial for long-term analysis of the effects of climate change. This is another great example of an end-to-end intelligent solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Roy is well skilled in the hardware and connection parts of this project, so it was brilliant to see the easy integration of the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Anomaly Detector API&lt;/A&gt;&amp;nbsp;from&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Cognitive Services&lt;/A&gt;&amp;nbsp;and to hear how quickly Roy could start using the service. Many IoT scenarios have a similar need for detecting rates and levels; in fact, Roy has hinted at a coffee level detector in the future. In a world where we all want to do our part to help the environment, it’s a great example of how monitoring enables us to measure changes over time and be alerted when issues arise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These are just 3 of the past winners and submissions. For more inspiration, visit our &lt;A href="https://azureai2019.devpost.com/project-gallery" target="_blank" rel="noopener"&gt;gallery of past submissions&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_4-1615224072337.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261687iD18EF85D12B00E6A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_4-1615224072337.png" alt="wmendoza_4-1615224072337.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Resources to get started&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Sign up for the Azure AI hackathon&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Visit our &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;AI for Developers resources page&lt;/A&gt; for tutorials and a curated 30-day learning journey&lt;/LI&gt;
&lt;LI&gt;Visit our &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;ML for Data Scientists resources page&lt;/A&gt; for tutorials and a curated 30-day learning journey&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 08 Mar 2021 17:52:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/put-ai-into-practice-with-microsoft-s-azure-ai-hackathon/ba-p/2193807</guid>
      <dc:creator>wmendoza</dc:creator>
      <dc:date>2021-03-08T17:52:41Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing semantic search: Bringing more meaningful results to Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/bc-p/2182172#M183</link>
      <description>&lt;P&gt;&lt;LI-USER uid="199"&gt;&lt;/LI-USER&gt;&amp;nbsp;,&amp;nbsp;we are starting with English for now.&amp;nbsp; The models have been trained to support other languages (like Spanish), but we want to do some more testing on those before we release them. Stay tuned.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Mar 2021 17:02:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/bc-p/2182172#M183</guid>
      <dc:creator>Luis Cabrera-Cordon</dc:creator>
      <dc:date>2021-03-03T17:02:52Z</dc:date>
    </item>
    <item>
      <title>Form Recognizer  now reads more languages, processes IDs and invoices, trains on tables, and more</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428</link>
      <description>&lt;P&gt;Documents contain invaluable information powering core business processes. Extracting information from these documents with minimum manual intervention helps bolster organizational efficiency and productivity. As more and more processes and workflows get automated, the need for new features to help extract text and structures increases.&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce the newest updates to Form Recognizer, which will be available on March 15, 2021.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What’s New?&lt;/H1&gt;
&lt;P&gt;Form Recognizer v2.1 public preview 3 will be available on March 15, 2021, and it will include:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Extract data from invoices&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Invoices are complex documents that vary in structure and contain data that is vital to an organization’s business processes. One of the most challenging tasks in extracting data from invoices is extracting the line items. The Form Recognizer invoice API now supports line-item extraction: it extracts the full line item and its parts – description, amount, quantity, product ID, date, and more. With a simple API or SDK call you can extract all the data from your invoices – text, tables, key-value pairs, and line items.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_0-1614713996067.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260103iD80DAC23070C7A6D/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_0-1614713996067.png" alt="chril1_0-1614713996067.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 1 Line items are extracted from invoices&lt;/P&gt;
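&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For illustration, here is a minimal sketch of reading invoice line items with the azure-ai-formrecognizer Python SDK (3.1.x); the endpoint, key, sample invoice URL, and the per-item field names are assumptions for the prebuilt invoice model.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: extract invoice line items with the prebuilt invoice model.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    endpoint="https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com",
    credential=AzureKeyCredential("YOUR-FORM-RECOGNIZER-KEY"),
)

poller = client.begin_recognize_invoices_from_url("https://example.com/sample-invoice.pdf")
invoice = poller.result()[0]

# "Items" holds the extracted line items; Description/Quantity/Amount are
# field names assumed from the prebuilt invoice schema.
items = invoice.fields.get("Items")
if items:
    for line_item in items.value:
        fields = line_item.value
        description = fields.get("Description")
        quantity = fields.get("Quantity")
        amount = fields.get("Amount")
        print(
            description.value if description else None,
            quantity.value if quantity else None,
            amount.value if amount else None,
        )&lt;/LI-CODE&gt;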
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Extract data from IDs&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The new pre-built ID model enables customers to take worldwide passports and U.S. driver’s licenses and get back structured data representing the information on the IDs. The new ID API extracts the text and values of interest from IDs, such as document number, last name, first name, date of expiration, country, and more.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_1-1614713996212.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260105i889A7DC374BC8FA0/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_1-1614713996212.png" alt="chril1_1-1614713996212.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 2 Pre-built ID model can extract information from passports and US drivers licenses&lt;/P&gt;
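&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Similarly, here is a minimal sketch of calling the prebuilt ID model with the azure-ai-formrecognizer Python SDK; the method name, field names, endpoint, key, and document URL shown are assumptions.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: extract fields from a passport or driver's license with the prebuilt ID model.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    endpoint="https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com",
    credential=AzureKeyCredential("YOUR-FORM-RECOGNIZER-KEY"),
)

poller = client.begin_recognize_identity_documents_from_url("https://example.com/sample-passport.jpg")
id_document = poller.result()[0]

# Field names assumed from the prebuilt ID schema described above.
for name in ("FirstName", "LastName", "DocumentNumber", "DateOfExpiration", "CountryRegion"):
    field = id_document.fields.get(name)
    if field:
        print(name, ":", field.value, "(confidence", field.confidence, ")")&lt;/LI-CODE&gt;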
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Supervised table labeling and training, empty-value labeling&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;In addition to the Form Recognizer &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011" target="_blank" rel="noopener"&gt;state-of-the-art deep learning automatic table extraction capabilities&lt;/A&gt;, it now also enables customers to train and label tables. This new release includes the ability to label line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained and documents are analyzed using this model, the new line items will be extracted as part of the JSON output in the documentResults section.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_2-1614713996234.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260104i7EA32F5641870EC1/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_2-1614713996234.png" alt="chril1_2-1614713996234.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 3 Label tables in your training dataset&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to labeling tables, you can now label empty values and regions: if some documents in your training set do not have values for certain fields, you can label them as empty so that your model knows how to extract values properly from analyzed documents.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_3-1614713996261.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260106iE214516451A73BC4/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_3-1614713996261.png" alt="chril1_3-1614713996261.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Natural reading order, handwriting classification, and page selection&lt;/H2&gt;
&lt;P&gt;With this update, you can choose to get the text line outputs in natural reading order instead of the default left-to-right and top-to-bottom ordering. Set the new readingOrder query parameter to “natural” for a more human-friendly reading-order output, as shown in the following example. Note that the first column’s text lines are output in order before the second and third columns.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_4-1614713996294.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260107i84780601C0EEF0A2/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_4-1614713996294.png" alt="chril1_4-1614713996294.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, for Latin languages, Form Recognizer will classify each text line as handwritten or not and return a confidence score, as seen below.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_5-1614713996397.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260108i151C506F95A6BAC2/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_5-1614713996397.png" alt="chril1_5-1614713996397.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_6-1614713996398.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260109i8241F1C3E8178E12/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_6-1614713996398.png" alt="chril1_6-1614713996398.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Furthermore, when analyzing a multi-page PDF or TIFF, you can now specify which pages you want to analyze.&lt;/P&gt;
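&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a sketch of how these two options might be combined, the following Python snippet calls the Layout analyze REST endpoint with both the readingOrder and pages query parameters; the API version path, endpoint, key, and document URL are assumptions.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: analyze layout with natural reading order on selected pages only.
import time
import requests

ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
KEY = "YOUR-FORM-RECOGNIZER-KEY"

submit = requests.post(
    ENDPOINT + "/formrecognizer/v2.1-preview.3/layout/analyze",
    params={"readingOrder": "natural", "pages": "1-3"},  # natural order, pages 1 to 3 only
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"source": "https://example.com/multi-page-document.pdf"},  # placeholder document URL
)
submit.raise_for_status()
result_url = submit.headers["Operation-Location"]  # poll this URL for the analysis result

while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

# Print text lines in the order returned by the service.
for page in result["analyzeResult"]["readResults"]:
    for line in page["lines"]:
        print(line["text"])&lt;/LI-CODE&gt;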
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Pre-built Receipt model quality improvements&lt;/H2&gt;
&lt;P&gt;This new update includes a number of quality improvements for the pre-built Receipt model, especially around line item extraction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Our Customers &amp;amp; Partners&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_0-1614714476433.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260159i3A74F9A570AD7DA6/image-size/small?v=v2&amp;amp;px=200" role="button" title="chril1_0-1614714476433.png" alt="chril1_0-1614714476433.png" /&gt;&lt;/span&gt;AvidXchange has developed an account payable automation solution leveraging Form Recognizer. “By partnering with Microsoft, we’re able to deliver an accounts payable automation solution for the middle market that’s truly powered by machine learning,” said Chris Tinsley, Chief Technology Officer at AvidXchange. “Our customers will benefit from faster invoice processing times and increased accuracy so we can help ensure their suppliers are paid the right amount, at the right time.”&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_1-1614714476450.png" style="width: 137px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260158i9C4223F271AD1BBF/image-dimensions/137x72?v=v2" width="137" height="72" role="button" title="chril1_1-1614714476450.png" alt="chril1_1-1614714476450.png" /&gt;&lt;/span&gt;WEX has developed a tool to process Explanation of Benefits documents using Form Recognizer. Matt Dallahan, Senior Vice President of Product Management and Strategy, said “The technology is truly amazing. I was initially worried that this type of solution would not be feasible, but I soon realized that the Form Recognizer can read virtually any document with accuracy.”&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_2-1614714476568.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260160i5B7B8BB596514014/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_2-1614714476568.png" alt="chril1_2-1614714476568.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_3-1614714476570.png" style="width: 122px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260161i1EC2D3C6CC8F32AF/image-dimensions/122x31?v=v2" width="122" height="31" role="button" title="chril1_3-1614714476570.png" alt="chril1_3-1614714476570.png" /&gt;&lt;/span&gt;GEP has developed an invoice processing solution for a client using Form Recognizer. “At GEP, we are seeing AI and automation make a profound impact on procurement and the supply chain. By combining our AI solution with Microsoft Form Recognizer, we automated the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort, while improving accuracy, controls and compliance on a global scale,” said Sarateudu Sethi, GEP’s Vice President of Artificial Intelligence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_4-1614714476596.png" style="width: 95px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260162i739B71AE5113A9D6/image-dimensions/95x45?v=v2" width="95" height="45" role="button" title="chril1_4-1614714476596.png" alt="chril1_4-1614714476596.png" /&gt;&lt;/span&gt;&amp;nbsp;“At Cross Masters, using cutting-edge AI technologies is not only a passion, it is an essential part of our work culture that requires continuous innovation. One of our latest success stories is automation of manual paperwork, required to process thousands of invoices. Thanks to Microsoft Form Recognizer’s AI engine we were able to develop a unique customized solution that provides to our clients market insights from large set of collected invoices. What we find the most convenient is human beating extraction quality and continuous introduction of new features, such as model composing or table labelling. This assures our client’s market advantage and helps our product to be the best-in-class solution” Jan Hornych, Head of Marketing Automation, Cross Masters&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Try out Form Recognizer&lt;/H1&gt;
&lt;P&gt;To get started with &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;, log in to the &lt;A href="https://azure.microsoft.com/en-us/features/azure-portal/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt; and create a Form Recognizer resource. Once your resource is created, you can start exploring Form Recognizer, with the improvements mentioned above coming on March 15. You can learn more about Form Recognizer &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Mar 2021 23:59:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428</guid>
      <dc:creator>christina-lee</dc:creator>
      <dc:date>2021-03-04T23:59:38Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing semantic search: Bringing more meaningful results to Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/bc-p/2180954#M182</link>
      <description>&lt;P&gt;Hi Luis, in which languages is semantic search supported? Is Spanish supported in this preview?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Mar 2021 07:35:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/bc-p/2180954#M182</guid>
      <dc:creator>Alberto Diaz Martin</dc:creator>
      <dc:date>2021-03-03T07:35:26Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2180933#M181</link>
      <description>&lt;P&gt;Any response to&amp;nbsp;&lt;LI-USER uid="914387"&gt;&lt;/LI-USER&gt;'s question "&lt;SPAN&gt;how&amp;nbsp; to remove short answer after publishing qna maker KB?" I am facing the same issue!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Mar 2021 07:23:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2180933#M181</guid>
      <dc:creator>julianportelli</dc:creator>
      <dc:date>2021-03-03T07:23:44Z</dc:date>
    </item>
    <item>
      <title>Introducing semantic search: Bringing more meaningful results to Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/ba-p/2175636</link>
      <description>&lt;P&gt;A few years ago, it became clear to our team that AI could bring value to our customers, from improvements in ingestion to data exploration. We knew we had a lot of these valuable assets around Microsoft, so our team set out on a mission to bring as much “intelligence” as we could to the product then known as Azure Search.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the first phase of this mission, we took on "unsearchable" content; about 80% of business relevant data is in unstructured formats such as PDFs, PowerPoints, Word documents, JPEGs, CSVs, etc. We added AI powered enrichments to our ingestion process, enabling the ability to extract structure, insights and transform information from your data.&amp;nbsp; These capabilities were well received by our customers, culminating in a product rebrand as "&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_self"&gt;Azure Cognitive Search&lt;/A&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am happy to announce that in our continuation of this journey, we are bringing state of the art AI capabilities to the “head” of our product, the core search sub-system. In partnership with the Bing team, we have integrated their semantic search investments (100s of development years and millions of dollars in compute time) into our query infrastructure, effectively enabling any developer to leverage this investment over searchable content that you own and manage. We believe semantic search on Azure Cognitive Search offers the best combination of search relevance, developer experience, and cloud service capabilities available on the market.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This post explains what new capabilities are available to you and how you can get started today. I would also encourage you to look at the post called “&lt;A href="https://aka.ms/ScienceBehindSemanticSearchPost" target="_blank" rel="noopener"&gt;Bing’s AI behind semantic search&lt;/A&gt;” that goes deeper into the Bing technology that made semantic search possible.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are launching several exciting semantic search features in a public preview:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Ranking&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Customers have grown accustomed to using natural language queries in web search engines, but these queries usually do not perform as well with a traditional keyword-based retrieval approach, where ranking is based only on term frequencies. To demonstrate this, consider what happens when a customer types a query like “&lt;EM&gt;how to add a user in Office&lt;/EM&gt;” in the Microsoft documentation. For this purpose, we loaded the entire Microsoft documentation dataset into Azure Cognitive Search so that we could compare the results between the default lexical-based ranking algorithm and the semantic ranking algorithm.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Traditional retrieval and ranking approach&lt;/H3&gt;
&lt;P&gt;The default ranker (BM25) &lt;EM&gt;uses&lt;/EM&gt; words as discrete units and predicts relevance by using the frequencies of terms in the corpus. &amp;nbsp;BM25 works well when searching for keywords, but it struggles to find the most relevant documents when issuing a natural language query.&lt;/P&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="keyword-search.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259293iDD72049F2FB2DBA7/image-size/large?v=v2&amp;amp;px=999" role="button" title="keyword-search.png" alt="keyword-search.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that the results do meet the lexical frequency requirements. For instance, inspecting the top document “&lt;A href="https://docs.microsoft.com/en-us/office/dev/add-ins/testing/testing-and-troubleshooting" target="_blank" rel="noopener"&gt;Troubleshoot user errors with Office Add-ins&lt;/A&gt;” shows that there are a lot of mentions of terms like “office”, “user”, “add” and “how to” in the document – but unfortunately the article does not provide the information we meant to query for.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Semantics-based ranking&lt;/H3&gt;
&lt;P&gt;With the release of semantic search, now we can enable a ranking algorithm that will use deep neural networks to rank the articles based on how “meaningful” they are relative to the query. Internally, this is a ranker that is applied on top of the results returned by the BM25-based ranker.&amp;nbsp; Using semantic search capabilities, these are the top results for our query:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-ranking.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259297i7D3F92BD5FC051FD/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-ranking.png" alt="semantic-ranking.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;I read the content of the top-document called “&lt;A href="https://docs.microsoft.com/en-us/microsoft-365/admin/add-users/add-users?view=o365-worldwide" target="_blank" rel="noopener"&gt;Add users and assign licenses at the same time&lt;/A&gt;”, and it is clear that this is exactly the document I need! Semantic search made this connection even though the title and content are not syntactically close to my query.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Answers&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;In the previous example, the title of the document by itself did not make it very easy for me to catch if that was a relevant document or not. I still had to read it to find the snippet in the documentation that told me how to add a user to Office.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The good news is that now you can also get semantic answers! It is one of my favorite features; it uses an AI model that extracts relevant passages from the top documents, and then ranks them on their likelihood of being an answer to the query. If we find a passage that has a high likelihood of answering the question, we will promote it as a semantic answer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is what it looks like, in this case. Note that we even leveraged a model from Bing to provide highlights for the most relevant section in the semantic answer.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-answer.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260399iB2DAD3E58640B6E1/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-answer.png" alt="semantic-answer.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Captions&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Similarly, we can extract the most relevant section of each document returned so you can quickly skim through the results and see if they have the content that you care about; making it easier for you to triage the results briefly and go deeper into the ones that you think are relevant given your context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-caption2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260400i62B4FCC1B258B55B/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-caption2.png" alt="semantic-caption2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started today!&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Using semantic search is easy. After you sign up for the preview at &lt;A href="http://aka.ms/semanticpreview" target="_blank" rel="noopener"&gt;http://aka.ms/semanticpreview&lt;/A&gt;, all you need to do is change your query parameters as part of the request as shown below. Note that there is no need to re-index any of your content!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;STRONG&gt;Query parameter&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;queryType&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “semantic” to indicate that you would like semantic ranking and answers.&lt;BR /&gt;Other values supported: “simple” and “full”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;searchFields&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Ordered list of fields that semantic ranking should be applied on. If you have a title or a short field that describes your document, we recommend that to be your first field.&amp;nbsp; Follow that by the url (if any), then the body of the document, and then any other relevant fields.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;queryLanguage&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;“en-us” is the only supported value today.&lt;BR /&gt;We will be adding more languages soon. Stay tuned.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;speller&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “lexicon” if you would like spell correction to occur on the query terms. Otherwise set to “none”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;answers&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “extractive” if you would like to get extractive answers. Otherwise set to “none”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;&lt;STRONG&gt;Sample Query&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-preview     
{   
      "search": " Where was Alan Turing born?",   
      "queryType": "semantic", 
      "searchFields": "title,url,body", 
      "queryLanguage": "en-us", 
      "speller": "lexicon",
      "answers": "extractive"  
}   
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Sample Response&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "@search.answers": [
        {
            "key": "a1234",               
            "text": "Turing was born in Maida Vale, London, while his father, Julius…",
            "highlights": " Turing was born in &amp;lt;strong&amp;gt;Maida Vale, London&amp;lt;/strong&amp;gt; , while …",
            "score": 0.87802511
        }
    ],
    "value": [
        {
            "@search.score": 51.64714,
            "@search.rerankerScore": 1.9928148165345192,
            "@search.captions": [
                {
                    "text": " Alan Mathison Turing, (born June 23, 1912, 
                             London, England—died June 7, 1954…",
                    "highlights": " Alan Mathison Turing, (born June 23, 1912,
                             &amp;lt;strong/&amp;gt;London, England&amp;lt;/strong&amp;gt;—died June…",
                       }
            ],
            "id": "b5678",
            "body":  "…"
        },
        …  
    ]
}
&lt;/LI-CODE&gt;
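&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you prefer Python, the same preview request can be issued with the requests package, as in the minimal sketch below; the service name, index name, and query key are placeholders.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: issue the preview semantic search request shown above from Python.
import requests

SERVICE = "YOUR-SEARCH-SERVICE-NAME"
INDEX = "YOUR-INDEX-NAME"
QUERY_KEY = "YOUR-QUERY-KEY"

response = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search",
    params={"api-version": "2020-06-30-preview"},
    headers={"api-key": QUERY_KEY, "Content-Type": "application/json"},
    json={
        "search": "Where was Alan Turing born?",
        "queryType": "semantic",
        "searchFields": "title,url,body",
        "queryLanguage": "en-us",
        "speller": "lexicon",
        "answers": "extractive",
    },
)
response.raise_for_status()
results = response.json()

# Print the promoted semantic answers first, then the re-ranked documents with captions.
for answer in results.get("@search.answers", []):
    print("ANSWER:", answer["text"], "(score", answer["score"], ")")
for doc in results["value"]:
    captions = doc.get("@search.captions", [])
    print(doc.get("id"), captions[0]["text"] if captions else "")&lt;/LI-CODE&gt;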
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://aka.ms/SemanticMainPage" target="_blank" rel="noopener"&gt;Semantic Search in our documentation.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am personally super excited about these new capabilities, the efficiencies that they will bring to you, and the progression of our vision to bring the best AI capabilities at Microsoft to Azure developers!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Luis Cabrera – on behalf of the Azure Cognitive Search team&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Customers &amp;amp; Partners&lt;/STRONG&gt;&lt;/H4&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90" style="width: 90px; border-style: none;"&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_10" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ppl.png" style="width: 183px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259286i60ABDBE97381C8AE/image-size/large?v=v2&amp;amp;px=999" role="button" title="ppl.png" alt="ppl.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="650" style="border-style: none;"&gt;
&lt;P&gt;&lt;A title="PPL Case Study" href="https://customers.microsoft.com/en-us/story/1344073022379788689-ppl-energy-azure" target="_blank" rel="noopener"&gt;Case Study&lt;/A&gt;:&amp;nbsp;&lt;EM&gt;PPL Electric Utilities Corporation, a utilities company, is working with Neudesic to create a web application with Azure Cognitive search to empower its field workers to find the most relevant information wherever they are.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90" style="border-style: none;"&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_11" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="howden.png" style="width: 213px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259287iF2EBFD672B695BD5/image-size/large?v=v2&amp;amp;px=999" role="button" title="howden.png" alt="howden.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="650" style="border-style: none;"&gt;
&lt;P&gt;&lt;A title="Howden Case Study" href="https://customers.microsoft.com/en-us/story/1344058341075309890-howden-energy-azure-ai" target="_self"&gt;Case Study&lt;/A&gt;:&amp;nbsp;&lt;EM&gt;Howden teamed up with OrangeNXT to further improve Smart Records using key elements of their digitalNXT Search, a fully managed cloud solution powered by Azure Cognitive Search. &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Call to Action&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A href="http://aka.ms/semanticpreview" target="_blank" rel="noopener"&gt;Preview sign-up form&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/3-10-21-announcing-an-azure-cognitive-search-ama/m-p/2157224" target="_blank" rel="noopener"&gt;Cognitive Search Team Ask Me Anything (March 10 2021)&lt;/A&gt; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN&gt;&lt;STRONG&gt;Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A href="https://aka.ms/SemanticMainPage" target="_blank" rel="noopener"&gt;Documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://aka.ms/SemanticSearchMechanicsVideo2" target="_blank" rel="noopener"&gt;Mechanics Video&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://aka.ms/ScienceBehindSemanticSearchPost" target="_blank" rel="noopener"&gt;Bing science behind semantic search&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Mar 2021 00:00:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/ba-p/2175636</guid>
      <dc:creator>Luis Cabrera-Cordon</dc:creator>
      <dc:date>2021-03-05T00:00:36Z</dc:date>
    </item>
    <item>
      <title>Re: Simplify and accelerate AI for the entire data science team with Azure Machine Learning designer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/simplify-and-accelerate-ai-for-the-entire-data-science-team-with/bc-p/2173403#M175</link>
      <description>&lt;P&gt;We have a subscription to Azure Classic; are we able to use the new Azure Machine Learning without additional charges?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 01 Mar 2021 05:15:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/simplify-and-accelerate-ai-for-the-entire-data-science-team-with/bc-p/2173403#M175</guid>
      <dc:creator>Gowindarajan</dc:creator>
      <dc:date>2021-03-01T05:15:04Z</dc:date>
    </item>
    <item>
      <title>Re: Get skilled on AI and ML – on your terms with Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/bc-p/2170165#M174</link>
      <description>&lt;LI-CODE lang="json"&gt;{
  "type": "object",
  "properties": {
    "id": {
      "type": "integer",
      "format": "int64"
    },
    "scaDetails": {
      "description": "Strong Customer Authentication challenge event information",
      "type": "object",
      "required": [
        "clientEventId",
        "eventId"
      ],
      "example": {
        "clientEventId": "47875409",
        "eventId": "abcd1234"
      },
      "properties": {
        "clientEventId": {
          "type": "string",
          "description": "2FA client event id"
        },
        "eventId": {
          "type": "string",
          "description": "2FA event id"
        },
        "link": {
          "type": "string",
          "description": "2FA event resource reference"
        }
      }
    },
    "link": {
      "type": "object",
      "properties": {
        "href": {
          "type": "string"
        },
        "rel": {
          "type": "string"
        }
      }
    },
    "status": {
      "type": "string",
      "enum": [
        "PROCESSED",
        "FAILED",
        "CANCELED",
        "CUSTOMER_REDIRECT",
        "SMS_VERIFICATION_REQUIRED",
        "MISSING_SENDER_DOB",
        "MISSING_SENDER_ADDRESS",
        "BALANCE_NOT_ENOUGH",
        "IN_PROGRESS",
        "PENDING",
        "PENDING_REVIEW",
        "PENDING_KYC",
        "PENDING_SCREENING",
        "SCHEDULED",
        "BANK_UNSUPPORTED",
        "SCA_CHALLENGE"
      ]
    }
  }
}&lt;/LI-CODE&gt;</description>
      <pubDate>Fri, 26 Feb 2021 23:16:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/bc-p/2170165#M174</guid>
      <dc:creator>h2Productsh2__di</dc:creator>
      <dc:date>2021-02-26T23:16:25Z</dc:date>
    </item>
    <item>
      <title>Re: Computer Vision for spatial analysis at the Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/2167022#M173</link>
      <description>&lt;P&gt;&lt;LI-USER uid="717444"&gt;&lt;/LI-USER&gt;&amp;nbsp;&lt;LI-USER uid="848076"&gt;&lt;/LI-USER&gt;&amp;nbsp;Thanks for your interest in Spatial Analysis.&amp;nbsp;&lt;SPAN&gt;We recommend NVIDIA T4 for production deployments. You may use a device with CUDA Compute Capability 6.0 or higher for testing (e.g., NVIDIA 1080Ti or 2080Ti).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 22:11:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/2167022#M173</guid>
      <dc:creator>jfilcik</dc:creator>
      <dc:date>2021-02-25T22:11:42Z</dc:date>
    </item>
    <item>
      <title>Ombromanie: Creating Hand Shadow stories with Azure Speech and TensorFlow.js Handposes</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/2166579</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_0-1613690550387.jpeg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255920iBE801D0624C8D1B8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_0-1613690550387.jpeg" alt="jelooper_0-1613690550387.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Even a tea light suffices to create a great effect&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;In 2020, and now into 2021, many folks are going back to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts that they used to love. Papermaking, anyone? All you need is a few tools and torn up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_1-1613690550389.jpeg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255921i2CEDDF41FB98B141/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_1-1613690550389.jpeg" alt="jelooper_1-1613690550389.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;This TikTok creator has thousands of views for their handshadow tutorials&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;But what's a developer to do when trying to capture that #cottagecore vibe in a web app?&lt;/P&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#high-tech-for-the-cottage" target="_blank" rel="noopener" name="high-tech-for-the-cottage"&gt;&lt;/A&gt;High Tech for the Cottage&lt;/H2&gt;
&lt;P&gt;While exploring the art of hand shadows, I wondered whether some of the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/jlooper/posedance" target="_blank" rel="noopener"&gt;recent work&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class=" fluidvids"&gt;&lt;IFRAME src="https://www.youtube.com/embed/ZWvZBEeS4qQ" width="710" height="399" allowfullscreen="allowfullscreen" class=" fluidvids-elem" loading="lazy" data-mce-fragment="1"&gt;&lt;/IFRAME&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;LI-WRAPPER&gt;&lt;/LI-WRAPPER&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#a-show-of-hands" target="_blank" rel="noopener" name="a-show-of-hands"&gt;&lt;/A&gt;A Show Of Hands&lt;/H2&gt;
&lt;P&gt;When you start researching hand poses, it's striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_2-1613690550390.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255922iC341945EDADA45EE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_2-1613690550390.png" alt="jelooper_2-1613690550390.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;MSR throwing hands&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;There are dozens of handpose libraries already on GitHub:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/topics/hand-tracking" target="_blank" rel="noopener"&gt;An entire GitHub topic on hand tracking&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/xinghaochen/awesome-hand-pose-estimation" target="_blank" rel="noopener"&gt;'Awesome' list for hand tracking&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://sites.google.com/view/hands2019/challenge" target="_blank" rel="noopener"&gt;Challenges and hackathons&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;There are many applications where tracking hands is a useful activity:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;• Gaming&lt;BR /&gt;• Simulations / Training&lt;BR /&gt;• "Hands free" uses for remote interactions with things by moving the body&lt;BR /&gt;• Assistive technologies&lt;BR /&gt;• TikTok effects :trophy:&lt;BR /&gt;• Useful things like&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://mcclanahoochie.com/accordionhands/" target="_blank" rel="noopener"&gt;Accordion Hands apps&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the more interesting new libraries,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://dev.to/midiblocks/introducing-handsfree-js-integrate-hand-face-and-pose-gestures-to-your-frontend-4g3p" target="_blank" rel="noopener"&gt;handsfree.js&lt;/A&gt;, offers an excellent array of demos in its effort to move to a hands free web experience:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_3-1613690550409.gif" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255925i68693FBE43C49072/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_3-1613690550409.gif" alt="jelooper_3-1613690550409.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Handsfree.js, a very promising project&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;As it turns out, hands are pretty complicated things. They&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;each&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;include 21 keypoints (vs PoseNet's 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_4-1613690550394.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255924i16174859EC1DA772/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_4-1613690550394.png" alt="jelooper_4-1613690550394.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
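&lt;P&gt;To make that structure concrete, here is a minimal sketch (not from the original post) of inspecting a single handpose prediction; the &lt;CODE&gt;landmarks&lt;/CODE&gt; and &lt;CODE&gt;annotations&lt;/CODE&gt; fields it logs are the same ones used later in this article:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;// Sketch only: assumes `model` was loaded with handpose.load() and
// `videoElement` is a playing video element (both are shown later in the post).
async function inspectHand(model, videoElement) {
  const predictions = await model.estimateHands(videoElement);
  if (predictions.length &amp;gt; 0) {
    // landmarks: 21 [x, y, z] keypoints for the detected hand
    console.log(predictions[0].landmarks.length); // 21
    // annotations: the same keypoints grouped by finger and palm base
    console.log(Object.keys(predictions[0].annotations));
  }
}
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;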
&lt;P&gt;There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js's handposes, and MediaPipe's. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe's handposes are perfect for our project. We will have to compromise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://github.com/tensorflow/tfjs-models/tree/master/handpose" target="_blank" rel="noopener"&gt;TensorFlow.js's handposes&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it only currently supports single hand poses, which is not optimal for good hand shadow shows.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://google.github.io/mediapipe/solutions/hands" target="_blank" rel="noopener"&gt;MediaPipe's handpose models&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which are used by TensorFlow.js) do allow for dual hands BUT its API does not allow for much styling of the keypoints so that drawing shadows using it is not obvious.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;One other library,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/andypotato/fingerpose" target="_blank" rel="noopener"&gt;fingerpose&lt;/A&gt;, is optimized for finger spelling in a sign language context and is worth a look.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Since it's more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's get to work to build this app.&lt;/P&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#scaffold-a-static-web-app" target="_blank" rel="noopener" name="scaffold-a-static-web-app"&gt;&lt;/A&gt;Scaffold a Static Web App&lt;/H2&gt;
&lt;P&gt;As a Vue.js developer, I always use the Vue CLI to scaffold an app using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;vue create my-app&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to create a standard app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of including my app files in a folder named&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;app&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and creating an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;api&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;folder to include an Azure function to store a key (more on this in a minute).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
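&lt;P&gt;For reference, a minimal router sketch for those two routes (the component file names below are illustrative, not necessarily the app's actual ones):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;// router.js - sketch of the two routes (Home and Show) mentioned above.
import Vue from "vue";
import VueRouter from "vue-router";
// Illustrative component paths; the real app may organize its views differently.
import Home from "./views/Home.vue";
import Show from "./views/Show.vue";

Vue.use(VueRouter);

export default new VueRouter({
  routes: [
    { path: "/", name: "home", component: Home },
    { path: "/show", name: "show", component: Show },
  ],
});
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;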
&lt;P&gt;In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow-models/handpose&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^0.0.6&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-backend-cpu&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-backend-webgl&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-converter&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-core&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="p"&gt;...&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;microsoft-cognitiveservices-speech-sdk&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^1.15.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#set-up-the-view" target="_blank" rel="noopener" name="set-up-the-view"&gt;&lt;/A&gt;Set up the View&lt;/H2&gt;
&lt;P&gt;We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight html"&gt;&lt;CODE&gt;&lt;SPAN class="nt"&gt;&amp;lt;div&lt;/SPAN&gt; &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"canvas-wrapper column is-half"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
&lt;SPAN class="nt"&gt;&amp;lt;canvas&lt;/SPAN&gt; &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"output"&lt;/SPAN&gt; &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"output"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&amp;lt;/canvas&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;video&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"video"&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"video"&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;playsinline&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;style=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"
          -webkit-transform: scaleX(-1);
           transform: scaleX(-1);
           visibility: hidden;
           width: auto;
           height: auto;
           position: absolute;
         "&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;gt;&amp;lt;/video&amp;gt;&lt;/SPAN&gt;
 &lt;SPAN class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/SPAN&gt;
 &lt;SPAN class="nt"&gt;&amp;lt;div&lt;/SPAN&gt; &lt;SPAN class="na"&gt;class=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"column is-half"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;canvas&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;class=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"has-background-black-bis"&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"shadowCanvas"&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"shadowCanvas"&lt;/SPAN&gt;
     &lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;/canvas&amp;gt;&lt;/SPAN&gt;
&lt;SPAN class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#load-the-model-start-keyframe-input" target="_blank" rel="noopener" name="load-the-model-start-keyframe-input"&gt;&lt;/A&gt;Load the Model, Start Keyframe Input&lt;/H2&gt;
&lt;P&gt;Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam, and start watching the video's keyframes for hand poses. It's important at these steps to ensure error handling in case the model fails to load or there's no webcam available.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;mounted&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;tf&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;setBackend&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;backend&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
    &lt;SPAN class="c1"&gt;//async load model, then load video, then pass it to start landmarking&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;model&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;handpose&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;load&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Model is loaded! Now loading video&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;try&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;loadVideo&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="k"&gt;catch&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;throw&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;

    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;landmarksRealTime&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
  &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#setup-the-webcam" target="_blank" rel="noopener" name="setup-the-webcam"&gt;&lt;/A&gt;Setup the Webcam&lt;/H2&gt;
&lt;P&gt;Still working asynchronously, set up the camera to provide a stream of images:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;setupCamera&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;!&lt;/SPAN&gt;&lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt; &lt;SPAN class="o"&gt;||&lt;/SPAN&gt; &lt;SPAN class="o"&gt;!&lt;/SPAN&gt;&lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getUserMedia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;throw&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Browser API navigator.mediaDevices.getUserMedia not available&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getUserMedia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;({&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;facingMode&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;user&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;VIDEO_WIDTH&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;height&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;VIDEO_HEIGHT&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;return&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Promise&lt;/SPAN&gt;&lt;SPAN class="p"&gt;((&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;resolve&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;srcObject&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;onloadedmetadata&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;resolve&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;};&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#design-a-hand-to-mirror-the-webcams" target="_blank" rel="noopener" name="design-a-hand-to-mirror-the-webcams"&gt;&lt;/A&gt;Design a Hand to Mirror the Webcam's&lt;/H2&gt;
&lt;P&gt;Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvas - red on top of the video, and black on top of the shadowCanvas. Since the shadowCanvas background is white, the hand is drawn as white as well and the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;landmarksRealTime&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="c1"&gt;//start showing landmarks&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//set up skeleton canvas&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;canvas&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;output&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//set up shadowCanvas&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;canvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getContext&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;2d&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getContext&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;2d&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//paint to main&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;clearRect&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; 
  &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;strokeStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;red&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fillStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;red&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;translate&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;scale&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;-&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//paint to shadow box&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;clearRect&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowColor&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;black&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowBlur&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowOffsetX&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;150&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowOffsetY&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;150&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;lineWidth&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;lineCap&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;round&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fillStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;white&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;strokeStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;white&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;translate&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;scale&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;-&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//now you've set up the canvases, now you can frame its landmarks&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;frameLandmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#for-each-frame-draw-keypoints" target="_blank" rel="noopener" name="for-each-frame-draw-keypoints"&gt;&lt;/A&gt;For Each Frame, Draw Keypoints&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As the keyframes progress, the model predicts new keypoints for each of the hand's elements, and both canvases are cleared and redrawn.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;model&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;estimateHands&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;result&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;landmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawKeypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;result&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;annotations&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;requestAnimationFrame&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;frameLandmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#draw-a-lifelike-hand" target="_blank" rel="noopener" name="draw-a-lifelike-hand"&gt;&lt;/A&gt;Draw a Lifelike Hand&lt;/H2&gt;
&lt;P&gt;Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand's coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Re-identify the fingers and palm:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;     &lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="nl"&gt;thumb&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;2&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;3&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;4&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;indexFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;5&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;6&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;7&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;8&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;middleFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;9&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;10&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;11&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;12&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;ringFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;13&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;14&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;15&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;16&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;pinky&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;17&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;18&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;19&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="nl"&gt;palm&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;5&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;9&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;13&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;17&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;...and draw them to screen:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;    &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Object&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;keys&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;lt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="o"&gt;++&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;finger&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="p"&gt;];&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;finger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;map&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;keypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawPath&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;false&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Object&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;keys&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;lt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="o"&gt;++&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palm&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="p"&gt;];&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palm&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;map&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;keypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawPath&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
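&lt;P&gt;The &lt;CODE&gt;drawPath&lt;/CODE&gt; helper itself isn't shown above; a minimal sketch of what such a helper could look like (my approximation, not the post's exact code) strokes the same path to both the main canvas and the shadowCanvas, closing it when drawing the palm polygon:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;    // Sketch only: draws one path (a finger, or the palm polygon) to both contexts.
    // `points` is an array of [x, y, z] keypoints; closePath is true for the palm.
    drawPath(ctx, sctx, points, closePath) {
      const region = new Path2D();
      region.moveTo(points[0][0], points[0][1]);
      for (let i = 1; i &amp;lt; points.length; i++) {
        region.lineTo(points[i][0], points[i][1]);
      }
      if (closePath) {
        region.closePath();
      }
      // red stroke on the video canvas; white stroke (with a black drop shadow) on the shadowCanvas
      ctx.stroke(region);
      sctx.stroke(region);
    },
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;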
&lt;P&gt;With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story.&lt;/P&gt;
&lt;P&gt;To do this, get a key from the Azure portal for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/?WT.mc_id=academic-14261-cxa" target="_blank" rel="noopener"&gt;Speech Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;by creating a Service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_5-1613690550393.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255923iE817738507CDC464/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_5-1613690550393.png" alt="jelooper_5-1613690550393.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can connect to this service by importing the sdk:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;CODE&gt;import * as sdk from "microsoft-cognitiveservices-speech-sdk";&lt;/CODE&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;...and start audio transcription after obtaining an API key via an Azure Function in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;/api&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;folder. This function fetches the key that is stored in the Azure portal for the Azure Static Web App where the app is hosted.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;startAudioTranscription&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;try&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="c1"&gt;//get the key&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;response&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;axios&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;get&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;/api/getKey&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;subKey&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;response&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;data&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="c1"&gt;//sdk&lt;/SPAN&gt;

        &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;speechConfig&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;SpeechConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fromSubscription&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;subKey&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;eastus&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;audioConfig&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;AudioConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fromDefaultMicrophoneInput&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;SpeechRecognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;speechConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;audioConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognized&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;s&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;result&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;story&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;push&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;};&lt;/SPAN&gt;

        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;startContinuousRecognitionAsync&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="k"&gt;catch&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;In this function, the SpeechRecognizer gathers the text it recognizes in chunks and organizes it into sentences. That text is written into a message string and displayed on the front end.&lt;/P&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#display-the-story" target="_blank" rel="noopener" name="display-the-story"&gt;&lt;/A&gt;Display the Story&lt;/H2&gt;
&lt;P&gt;In this last part, the output cast onto the shadowCanvas is captured as a stream and recorded using the MediaRecorder API:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;captureStream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;60&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt; &lt;SPAN class="c1"&gt;// 60 FPS recording&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;MediaRecorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;mimeType&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;video/webm;codecs=vp9&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ondataavailable&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;chunks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;push&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;data&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}),&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;start&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;500&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;...and displayed below as a video with the storyline in a new div:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;document&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;createElement&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;video&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fullBlob&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;Blob&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;chunks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;downloadUrl&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;window&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;URL&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;createObjectURL&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fullBlob&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;src&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;downloadUrl&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="nb"&gt;document&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getElementById&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;story&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;).&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;appendChild&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;autoplay&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;controls&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;This app can be deployed as an Azure Static Web App using the excellent&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/vscode-azurestaticwebapps" target="_blank" rel="noopener"&gt;Azure plugin for Visual Studio Code&lt;/A&gt;. And once it's live, you can tell durable shadow stories!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_6-1613690550392.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255926i4747DEF9FF5D67A3/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_6-1613690550392.png" alt="jelooper_6-1613690550392.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Try Ombromanie&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/ombromanie" target="_blank" rel="noopener"&gt;here&lt;/A&gt;. The codebase is available&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/ombromanie-code" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Take a look at Ombromanie in action:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/HV__puO1Dco" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" data-mce-fragment="1"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=ca-14261-jelooper" target="_blank" rel="noopener"&gt;Learn more about AI on Azure&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://www.youtube.com/watch?v=h281NX568rU&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=4" target="_blank" rel="noopener"&gt;Azure AI Essentials Video covering speech and language&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://azure.microsoft.com/en-us/free/?OCID=AID3029145&amp;amp;WT.mc_id=ca-14261-jelooper" target="_blank" rel="noopener"&gt;Azure free account sign-up&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 19:09:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/2166579</guid>
      <dc:creator>jelooper</dc:creator>
      <dc:date>2021-02-25T19:09:52Z</dc:date>
    </item>
    <item>
      <title>Responsible Machine Learning with Error Analysis</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/responsible-machine-learning-with-error-analysis/ba-p/2141774</link>
      <description>&lt;DIV class="lia-message-subject-wrapper lia-component-subject lia-component-message-view-widget-subject-with-options"&gt;&lt;SPAN style="color: inherit; font-family: inherit; font-size: 24px;"&gt;Overview&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="lia-message-body-wrapper lia-component-message-view-widget-body"&gt;
&lt;DIV id="bodyDisplay" class="lia-message-body"&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;&lt;STRONG&gt;Website:&lt;/STRONG&gt; &lt;A href="http://erroranalysis.ai/" target="_blank" rel="noopener"&gt;ErrorAnalysis.ai&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Github repository:&lt;/STRONG&gt; &lt;A href="https://github.com/microsoft/responsible-ai-widgets/" target="_blank" rel="noopener"&gt;https://github.com/microsoft/responsible-ai-widgets/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Machine Learning (ML) teams who deploy models in the real world often face the challenge of conducting rigorous performance evaluation and testing for ML models. How often do we read claims such as “Model X is 90% accurate on a given benchmark” and wonder what that claim means for practical usage of the model? In practice, teams are well aware that model accuracy may not be uniform across subgroups of data and that there might exist input conditions for which the model fails more often. Such failures may have direct consequences: lack of reliability and safety, unfairness, or more broadly a loss of trust in machine learning altogether. For instance, when a traffic sign detector does not operate well in certain daylight conditions or for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://interestingengineering.com/tesla-autopilot-mistakes-red-letters-on-flag-for-red-traffic-lights" target="_self" rel="nofollow noopener noreferrer"&gt;unexpected inputs&lt;/A&gt;, even though the overall accuracy of the model may be high, it is still important for the development team to know ahead of time that the model may not be as reliable in such situations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="besmiranushi_0-1613538210604.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255440i28671D47179C4A7D/image-size/large?v=v2&amp;amp;px=999" role="button" title="besmiranushi_0-1613538210604.png" alt="besmiranushi_0-1613538210604.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 1&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- Error Analysis moves away from aggregate accuracy metrics, exposes the distribution of errors to developers in a transparent way, and enables them to identify &amp;amp; diagnose errors efficiently.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;While there exist several problems with current model assessment practices, one of the most obvious is the usage of aggregate metrics to score models on a whole benchmark. It is difficult to convey a detailed story on model behavior with a single number, and yet most research and leaderboards operate on single scores. At the same time, there may exist several dimensions of the input feature space that a practitioner may be interested in exploring in depth, asking questions such as “What happens to the accuracy of the recognition model in a self-driving car when it is dark and snowing outside?” or “Does the loan approval model perform similarly for population cohorts across ethnicity, gender, age, and education?”. Navigating the terrain of failures along multiple potential dimensions like these can be challenging. In addition, in the longer term, when models are updated and re-deployed frequently based on new data evidence or scientific progress, teams also need to continuously track and monitor model behavior so that&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/research/blog/creating-better-ai-partners-a-case-for-backward-compatibility/" target="_self" rel="noopener noreferrer"&gt;updates do not introduce new mistakes and therefore break user trust&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To address these problems, practitioners often have to create custom infrastructure, which is tedious and time-consuming. To accelerate rigorous ML development, in this blog you will learn how to use the Error Analysis tool for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Getting a deep understanding of how failure is distributed for a model.&lt;/LI&gt;
&lt;LI&gt;Debugging ML errors with active data exploration and interpretability techniques.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The Error Analysis toolkit is integrated within the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/responsible-ai-widgets" target="_self" rel="noopener noreferrer"&gt;Responsible AI Widgets&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;OSS repository, our starting point for providing a set of integrated tools to the open source community and ML practitioners. Beyond this contribution to the OSS RAI community, practitioners can also leverage these assessment tools in&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_self" rel="noopener noreferrer"&gt;Azure Machine Learning&lt;/A&gt;, including&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://fairlearn.github.io/" target="_self" rel="nofollow noopener noreferrer"&gt;Fairlearn&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&amp;amp;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://interpret.ml/" target="_self" rel="nofollow noopener noreferrer"&gt;InterpretML&lt;/A&gt;, with Error Analysis following in mid-2021.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are interested in learning more about training model updates that remain backward compatible with their previous selves by minimizing regressions and new errors, you can also check out our most recent open source library and tool&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/BackwardCompatibilityML/" target="_blank" rel="noopener noreferrer"&gt;BackwardCompatibilityML&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1962279027"&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;To install the Responsible AI Widgets “raiwidgets” package from &lt;A href="https://pypi.org/project/raiwidgets/" target="_blank" rel="noopener"&gt;pypi&lt;/A&gt;, simply run the following in your Python environment. If you do not have interpret-community already installed, you will also need to install it to support the generation of model explanations.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install interpret-community
pip install raiwidgets&lt;/LI-CODE&gt;
&lt;P&gt;Alternatively, you can also clone the open source repository and build the code from scratch:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;git clone https://github.com/microsoft/responsible-ai-widgets.git&lt;/LI-CODE&gt;
&lt;P&gt;You will need to install yarn and node to build the visualization code, and then you can run:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;yarn install
yarn buildall&lt;/LI-CODE&gt;
&lt;P&gt;And install from the raiwidgets folder locally:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;cd raiwidgets
pip install -e .&lt;/LI-CODE&gt;
&lt;P&gt;For more information see the &lt;A href="https://github.com/microsoft/responsible-ai-widgets/blob/main/CONTRIBUTING.md" target="_blank" rel="noopener"&gt;contributing guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;If you intend to run repository tests, in the raiwidgets folder of the repository run:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install -r requirements.txt&lt;/LI-CODE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="toc-hId-154824564"&gt;Getting started&lt;/H2&gt;
&lt;P&gt;This post illustrates the Error Analysis tool by using a binary classification task on income prediction (&amp;gt;50K, &amp;lt;50K). The model under inspection will be trained using the tabular&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://archive.ics.uci.edu/ml/datasets/Census+Income" target="_blank" rel="noopener nofollow noreferrer"&gt;UCI Census Income dataset&lt;/A&gt;, which contains both numerical and categorical features such as age, education, number of working hours, ethnicity, etc.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We can call the error analysis dashboard using the API below, which takes in an explanation object computed by one of the explainers from the interpret-community repository, the model or pipeline, a dataset and the corresponding labels (true_y parameter):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;ErrorAnalysisDashboard(global_explanation, model, dataset=x_test, true_y=y_test)&lt;/LI-CODE&gt;
&lt;P&gt;For larger datasets, we can downsample the explanation to fewer rows but run error analysis on the full dataset.&amp;nbsp; We can provide the downsampled explanation, the model or pipeline, the full dataset, and then both the labels for the sampled explanation and the full dataset, as well as (optionally) the names of the categorical features:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;ErrorAnalysisDashboard(global_explanation, model, dataset=X_test_original_full,true_y=y_test, categorical_features=categorical_features, true_y_dataset=y_test_full)&lt;/LI-CODE&gt;
&lt;P&gt;All screenshots below are generated using an LGBMClassifier with three estimators. You can run this example directly using the &lt;A href="https://github.com/microsoft/responsible-ai-widgets/tree/main/notebooks" target="_self"&gt;Jupyter notebooks in our repository&lt;/A&gt;.&lt;/P&gt;
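&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For orientation, here is a minimal end-to-end sketch of how such a setup might look. It is not taken from the notebooks: the data loading is assumed (X as a DataFrame of census features, y as the binary income label), and the TabularExplainer import path may differ depending on your interpret-community version.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch (assumptions noted above): train the classifier and open the dashboard.
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from interpret.ext.blackbox import TabularExplainer  # assumed import path
from raiwidgets import ErrorAnalysisDashboard

# Assumption: X holds the census features, y the binary income label.
X_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

model = LGBMClassifier(n_estimators=3)  # three estimators, as in the screenshots
model.fit(X_train, y_train)

# Global explanation object consumed by the dashboard
explainer = TabularExplainer(model, X_train, features=list(X_train.columns))
global_explanation = explainer.explain_global(x_test)

ErrorAnalysisDashboard(global_explanation, model, dataset=x_test, true_y=y_test)&lt;/LI-CODE&gt;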
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="toc-hId--1652629899"&gt;How Error Analysis works&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-834882934"&gt;1. Identification&lt;/H2&gt;
&lt;P&gt;Error Analysis starts with identifying the cohorts of data with a higher error rate versus the overall benchmark error rate. The dashboard allows for error exploration by using either an error heatmap or a decision tree guided by errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Error Heatmap for Error Identification&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The view slices the data based on a one- or two-dimensional grid of input features. Users can choose the input features of interest for analysis. The heatmap visualizes cells with higher error with a darker red color to bring the user’s attention to regions with high error discrepancy. This is especially beneficial when the error themes are different in different partitions, which happens frequently in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses of what features might be most important for understanding failure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="heatmap.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255447iA751BBC3C7FE1F8D/image-size/large?v=v2&amp;amp;px=999" role="button" title="heatmap.png" alt="heatmap.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 2&lt;/STRONG&gt;&amp;nbsp;-&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;While the overall error rate for the dataset is 23.65%, the heatmap reveals that the error rates are visibly higher, up to 83%, for individuals with higher education. Error rates are also higher for males vs. females.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Decision Tree for Error Identification&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Very often, error patterns may be complex and involve more than one or two features. Therefore, it may be difficult for developers to explore all possible combinations of features to discover hidden data pockets with critical failure. To alleviate the burden, the binary tree visualization automatically partitions the benchmark data into interpretable subgroups, which have unexpectedly high or low error rates. In other words, the tree leverages the input features to maximally separate model error from success. For each node defining a data subgroup, users can investigate the following information:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Error rate&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the portion of instances in the node for which the model is incorrect. This is shown through the intensity of the red color.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error coverage&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the portion of all errors that fall into the node. This is shown through the fill rate of the node.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data representation&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the number of instances in the node. This is shown through the thickness of the incoming edge to the node, along with the actual total number of instances in the node.&lt;/LI&gt;
&lt;/UL&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="tree.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255448iD3C9D47644F86366/image-size/large?v=v2&amp;amp;px=999" role="button" title="tree.png" alt="tree.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 3&lt;/STRONG&gt;&amp;nbsp;– Decision tree that aims at finding failure modes by separating error instances from success instances in the data. The hierarchical error pattern here shows that while the overall error rate is 23.65% for the dataset, it can be as high as 96.77% for individuals who are married, have a capital gain higher than 4401, and a number of education years higher than 12.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Cohort definition and manipulation&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To specialize the analysis and allow for deep dives, both error identification views can be generated for any data cohort and not only for the whole benchmark. Cohorts are subgroups of data that the user may choose to save for later use if they wish to come back to those cohorts for future investigation. They can be defined and manipulated interactively either from the heatmap or the tree. They can also be carried over to the next diagnostic views on data exploration and model explanations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cohort manipulation.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255449i286BAA42FA7B9F0C/image-size/large?v=v2&amp;amp;px=999" role="button" title="cohort manipulation.png" alt="cohort manipulation.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 4&lt;/STRONG&gt;&amp;nbsp;- Creating a new cohort for further investigation that focuses on individuals who are married and have capital gain lower than 4401.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--972571529"&gt;2. Diagnosis&lt;/H2&gt;
&lt;P&gt;After identifying cohorts with higher error rates, Error Analysis enables debugging and exploring these cohorts further. It is then possible to gain deeper insights about the model or the data through data exploration and model interpretability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1514941304"&gt;Debugging the data&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Data Explorer&lt;/STRONG&gt;: Users can explore dataset statistics and distributions by selecting different features and estimators along the two axes of the data explorer. They can further compare the subgroup data stats with other subgroups or the overall benchmark data. This view can, for instance, uncover whether certain cohorts are underrepresented or whether their feature distribution is significantly different from the overall data, therefore hinting at the potential existence of outliers or unusual covariate shift.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="data explorer.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255450i97C20B813B8E18D8/image-size/large?v=v2&amp;amp;px=999" role="button" title="data explorer.png" alt="data explorer.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Figure 5&lt;/STRONG&gt;&amp;nbsp;-&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;In Figures 2 and 3, we discovered that for individuals with a higher number of education years, the model has higher failure rates. When we look at how the data is distributed across the feature “education_num”, we can see that a) there are fewer instances for individuals with more than 12 years of education, and b) for this cohort the distribution between lower income (&lt;FONT color="#3366FF"&gt;&lt;STRONG&gt;blue&lt;/STRONG&gt;&lt;/FONT&gt;) and higher income (&lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;orange&lt;/STRONG&gt;&lt;/FONT&gt;) is very different from that of other cohorts. In fact, for this cohort there are more people with an income higher than 50K, which is not true for the overall data.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Instance views&lt;/STRONG&gt;: Beyond data statistics, sometimes it is useful to simply observe the raw data along with its labels in a tabular or tile form. Instance views provide this functionality and divide the instances into correct and incorrect tabs. By eyeballing the data, the developer can identify potential issues related to missing features or label noise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--292513159"&gt;Debugging the model&lt;/H2&gt;
&lt;P&gt;Model interpretability is a powerful means for extracting knowledge on how a model works. To extract this knowledge, Error Analysis relies on Microsoft’s&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/interpretml" target="_blank" rel="noopener noreferrer"&gt;InterpretML&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;dashboard and library. The library is a prominent contribution to ML interpretability, led by Rich Caruana, Paul Koch, Harsha Nori, and Sam Jenkins.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Global explanations&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Feature Importance&lt;/STRONG&gt;: Users can explore the top K important features that impact the overall model predictions (a.k.a. global explanation) for a selected subgroup of data or cohort. They can also compare feature importance values for different cohorts side by side. The information on feature importance or the ordering is useful for understanding whether the model is leveraging features that are necessary for the prediction or whether it is relying on spurious correlations. By contrasting explanations that are specific to the cohort with those for the whole benchmark, it is possible to understand whether the model behaves differently or in an unusual way for the selected cohort.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Dependence Plot&lt;/STRONG&gt;: Users can see the relationship between the values of the selected feature and its corresponding feature importance values. This shows them how the values of the selected feature impact the model prediction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="global explanations.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255451i12F5306D2F532C39/image-size/large?v=v2&amp;amp;px=999" role="button" title="global explanations.png" alt="global explanations.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 6&lt;/STRONG&gt;&amp;nbsp;- Global feature explanations for the income prediction model show that marital status and number of education years are the most important features globally. By clicking on each feature, it is possible to observe more granular dependencies. For example, marital statuses like “divorced”, “never married”, “separated”, or “widowed” contribute to model predictions for lower income (&amp;lt;50K). Marital status of “civil spouse” instead contributes to model predictions for higher income (&amp;gt;50K).&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Local explanations&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Global explanations approximate the overall model behavior. To focus the debugging process on a given data instance, users can select any individual data point (with a correct or incorrect prediction) from the tabular instance view and explore its local feature importance values (local explanation) and individual conditional expectation (ICE) plots.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Local Feature Importance&lt;/STRONG&gt;: Users can investigate the top K (configurable K) important features for an individual prediction. This helps illustrate the local behavior of the underlying model on a specific data point.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Individual Conditional Expectation&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;(ICE)&lt;/STRONG&gt;: Users can investigate how changing a feature value from a minimum value to a maximum value impacts the prediction on the selected data instance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Perturbation Exploration (what-if analysis)&lt;/STRONG&gt;: Users can apply changes to feature values of the selected data point and observe resulting changes to the prediction. They can save their hypothetical what-if data points for further comparisons with other what-if or original data points.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local explanation what if.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255452i4FAEA0561185300D/image-size/large?v=v2&amp;amp;px=999" role="button" title="local explanation what if.png" alt="local explanation what if.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 7&lt;/STRONG&gt;&amp;nbsp;- For this individual, the model outputs a wrong prediction, predicting that the individual earns less than 50K, while the opposite is true. With what-if explanations, it is possible to understand how the model would behave if one of the feature values changes. For instance, here we can see that if the individual were 10 years older (age changed from 32 to 42) the model would have made a correct prediction. While in the real world many of these features are not mutable, this sensitivity analysis is intended to further support practitioners with model understanding capabilities.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--2099967622"&gt;Other relevant tools&lt;/H2&gt;
&lt;P&gt;Error Analysis enables practitioners to identify and diagnose error patterns. The integration with model interpretability techniques testifies to the joint power of providing such tools together as part of the same platform. We are actively working towards integrating further considerations into the model assessment experience such as fairness and inclusion (via&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://fairlearn.github.io/" target="_self" rel="nofollow noopener noreferrer"&gt;FairLearn&lt;/A&gt;) as well as backward compatibility during updates (via&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/BackwardCompatibilityML" target="_self" rel="noopener noreferrer"&gt;BackwardCompatibilityML&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-387545211"&gt;Our team&lt;/H2&gt;
&lt;P&gt;The initial work on error analysis started with research investigations on methodologies for in-depth understanding and explanation of Machine Learning failures.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://besmiranushi.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Besmira Nushi&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.ecekamar.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Ece Kamar&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://www.erichorvitz.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Eric Horvitz&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;at Microsoft Research are leading these efforts and continue to innovate with new techniques for debugging ML models. In the past year, our team was extended via a collaboration with the RAI tooling team in the Azure Machine Learning group as well as the Analysis Platform team in Microsoft Mixed Reality. The Analysis Platform team has invested several years of engineering work in building internal infrastructure and now we are making these efforts available to the community as open source as part of the Azure Machine Learning ecosystem. The RAI tooling team consists of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/imatiach/" target="_blank" rel="noopener nofollow noreferrer"&gt;Ilya Matiach&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://cs-people.bu.edu/sameki/" target="_blank" rel="noopener nofollow noreferrer"&gt;Mehrnoosh Sameki&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/romanlutz/" target="_blank" rel="noopener nofollow noreferrer"&gt;Roman Lutz&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/richard-edgar-48aa0613/" target="_blank" rel="noopener nofollow noreferrer"&gt;Richard Edgar&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/hyemisong/" target="_blank" rel="noopener nofollow noreferrer"&gt;Hyemi Song&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/minsoothigpen/" target="_blank" rel="noopener nofollow noreferrer"&gt;Minsoo Thigpen&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/anupshirgaonkar/" target="_blank" rel="noopener nofollow noreferrer"&gt;Anup Shirgaonkar&lt;/A&gt;. They are passionate about democratizing Responsible AI and have several years of experience in shipping such tools for the community with previous examples on FairLearn, InterpretML Dashboard etc. We also received generous help and expertise along the way from our partners at Microsoft Aether Committee and Microsoft Mixed Reality:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://linkedin.com/in/parham-mohadjer-09365b96/" target="_blank" rel="noopener nofollow noreferrer"&gt;Parham Mohadjer&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://linkedin.com/in/paulbkoch/" target="_blank" rel="noopener nofollow noreferrer"&gt;Paul Koch&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/praphat-xavier-fernandes-86574814/" target="_blank" rel="noopener nofollow noreferrer"&gt;Xavier Fernandes&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/juanlema/" target="_blank" rel="noopener nofollow noreferrer"&gt;Juan Lema&lt;/A&gt;. 
All marketing initiatives, including the presentation of this blog, were coordinated by&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/thuylnguyen/" target="_blank" rel="noopener nofollow noreferrer"&gt;Thuy Nguyen&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Big thanks to everyone who made this possible!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Related research&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure&lt;/STRONG&gt;. Besmira Nushi, Ece Kamar, Eric Horvitz; HCOMP 2018. &lt;A href="https://www.microsoft.com/en-us/research/publication/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Software Engineering for Machine Learning: A Case Study&lt;/STRONG&gt;.&amp;nbsp;Saleema Amershi, Andrew Begel, Christian Bird, Rob DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, Thomas Zimmermann; ICSE 2019. &lt;A href="https://www.microsoft.com/en-us/research/publication/software-engineering-for-machine-learning-a-case-study/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff&lt;/STRONG&gt;.&amp;nbsp;Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S Weld, Walter S Lasecki, Eric Horvitz; AAAI 2019. &lt;A href="https://www.microsoft.com/en-us/research/publication/updates-in-human-ai-teams-understanding-and-addressing-the-performance-compatibility-tradeoff/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;An Empirical Analysis of Backward Compatibility in Machine Learning Systems&lt;/STRONG&gt;. Megha Srivastava, Besmira Nushi, Ece Kamar, Shital Shah, Eric Horvitz; KDD 2020. &lt;A href="https://www.microsoft.com/en-us/research/publication/an-empirical-analysis-of-backward-compatibility-in-machine-learning-systems/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Understanding Failures of Deep Networks via Robust Feature Extraction&lt;/STRONG&gt;. Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, Eric Horvitz. arXiv 2020. &lt;A href="https://arxiv.org/abs/2012.01750" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 18 Feb 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/responsible-machine-learning-with-error-analysis/ba-p/2141774</guid>
      <dc:creator>besmiranushi</dc:creator>
      <dc:date>2021-02-18T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Translator announces Document Translation (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185</link>
      <description>&lt;P&gt;We are announcing Document Translation, a new feature in Azure Translator service which enables enterprises, translation agencies, and consumers who require volumes of complex documents to be translated into one or more languages preserving structure and format in the original document. Document Translation is an asynchronous batch feature offering translation of large documents eliminating limits on input text size. It supports documents with rich content in different file formats including Text, HTML, Word, Excel, PowerPoint, Outlook Message, PDF, etc. It reconstructs translated documents preserving layout and format as present in the source.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DocTransImage (1).png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255600i94D33232A7D04C97/image-size/large?v=v2&amp;amp;px=999" role="button" title="DocTransImage (1).png" alt="DocTransImage (1).png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Standard translation offerings in the market accept only plain text or HTML, and limit the number of characters in a request. Users translating large documents must parse the documents to extract text, split them into smaller sections, and translate them separately. If sentences are split at an unnatural breakpoint, context can be lost, resulting in suboptimal translations. Upon receipt of the translation results, the customer has to merge the translated pieces back into the translated document. This involves keeping track of which translated piece corresponds to the equivalent section in the original document.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;The problem gets complicated when customers want to translate complex documents with rich content. They convert the original files, in a variety of formats, to either .html or .txt, and then reconvert the translated content from the html or txt files back into the original document file format. The transformation may result in various issues. The problem gets compounded when the customer needs to translate a) a large quantity of documents, b) documents in a variety of file formats, c) documents while preserving the original layout and format, or d) documents into multiple target languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Document Translation is an asynchronous offering to which the user makes a request specifying the location of the source and target documents and the list of target output languages. Document Translation returns a job identifier enabling the user to track the status of the translation. Asynchronously, Document Translation pulls each document from the source location, recognizes the document format, applies the right parsing technique to extract the textual content, and translates that content into the target languages. It then reconstructs the translated document, preserving the layout and format present in the source document, and stores the translated document in the specified location. Document Translation updates the status of the translation at the document level. This makes it easy for the customer to translate volumes of large documents in a variety of document formats into a list of target languages, eliminating the challenges customers face today and improving their productivity.&lt;/P&gt;
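&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch only, a client might submit a batch job along the following lines from Python. The endpoint path, payload shape, and header name here are assumptions based on the preview documentation (see the References below), and the storage SAS URLs are placeholders; consult the user documentation for the exact contract.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Rough sketch of submitting a Document Translation batch job (preview).
# Assumptions: REST route, payload shape and header name follow the preview docs;
# all URLs and keys below are placeholders.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"            # placeholder
KEY = "YOUR-TRANSLATOR-KEY"                                               # placeholder
SOURCE_SAS_URL = "https://YOURSTORAGE.blob.core.windows.net/source?SAS"   # placeholder
TARGET_SAS_URL = "https://YOURSTORAGE.blob.core.windows.net/target?SAS"   # placeholder

body = {
    "inputs": [
        {
            "source": {"sourceUrl": SOURCE_SAS_URL},
            "targets": [{"targetUrl": TARGET_SAS_URL, "language": "fr"}],
        }
    ]
}

resp = requests.post(
    ENDPOINT + "/translator/text/batch/v1.0/batches",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
resp.raise_for_status()

# The job can then be polled for document-level status via the Operation-Location header.
job_status_url = resp.headers["Operation-Location"]
print(requests.get(job_status_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json())&lt;/LI-CODE&gt;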
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Document Translation enables users to customize translation of documents by providing custom glossaries, a custom translation model id built using &lt;A href="https://portal.customtranslator.azure.ai/" target="_self"&gt;Custom Translator&lt;/A&gt;, or both as part of the request. Such customization retains specific terminology and provides domain-specific translations in the translated documents.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;“Translation of documents with rich formatting is a tricky business. We need the translation to be fluent and matching the context, while maintaining high fidelity in the visual appearance of complex documents. Document Translation is designed to address those goals, relieving client applications from having to disassemble and reassemble the documents after translation, making it easy for developers to build workflows that process full documents with a few simple steps.”, said Chris Wendt, Principal Program Manager.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more about Translator and the Document Translation feature in the video below:&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=ZKkoaV1dGew" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/ZKkoaV1dGew/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;References&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/overview" target="_self"&gt;User documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/cognitive-services/translator/" target="_self"&gt;Pricing&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Send your feedback to &lt;A href="mailto:translator@microsoft.com" target="_blank" rel="noopener"&gt;translator@microsoft.com&lt;/A&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Feb 2021 21:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185</guid>
      <dc:creator>Krishna_Doss</dc:creator>
      <dc:date>2021-02-17T21:30:00Z</dc:date>
    </item>
    <item>
      <title>Hello, bot! Conversational AI on Microsoft Platform</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/hello-bot-conversational-ai-on-microsoft-platform/ba-p/2139570</link>
      <description>&lt;DIV&gt;During the pandemic, we all found ourselves in isolation, and relying more and more on effective electronic means of communication. The amount of digital conversations increased dramatically, and we need to rely on bots to help us handle some of those conversations. In this blog post, I give brief overview of conversational AI on Microsoft platform and show you how to build a simple educational bot to help students learn.&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you prefer video content, here is a great video to get you started, from our &lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;AI Developer Resources&lt;/A&gt; page:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;BR /&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=Nh3S_sljkpI" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/Nh3S_sljkpI/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H1 id="do-we-need-bots-and-when"&gt;Do We Need Bots, and When?&lt;/H1&gt;
&lt;P&gt;Many people believe that in the future we will be interacting with computers using speech, in the same way we interact with each other. While that future is still vague, we can already benefit from conversational interfaces in many areas, for example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;In user support, which has traditionally been based on interpersonal communication, automated chat-bots can solve a lot of routine problems for users, leaving human specialists to handle only the unusual cases.&lt;/LI&gt;
&lt;LI&gt;During surgical operations, where hands-free interaction is essential. From personal experience, I also find it more convenient to set a morning alarm and “good night” music through a voice assistant before going to sleep.&lt;/LI&gt;
&lt;LI&gt;Automating some functions in interpersonal communication. My favorite example is a chat-bot that you can add to a group chat when organizing a party, and it will track how much money each participant has spent on the preparations.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;At the current state of development of conversational AI technologies, a chat bot will not replace a human, and it will not pass the Turing test.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;&lt;FONT size="4"&gt;In practice, chat bots act as an advanced version of a command line, in which you do not need to know exact commands to perform an action.&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Thus, successful bot applications will not try to pretend to be a human, because such behavior is likely to cause some user dissatisfaction in the future. It is one of the &lt;A href="https://www.microsoft.com/ai/ai-lab-conversational-ai?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;responsible conversational AI principles&lt;/A&gt;, which you need to consider when designing a bot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="educational-bots"&gt;Educational Bots&lt;/H1&gt;
&lt;P&gt;During the pandemic, one of the areas being transformed the most is education. We can envision educational bots that help students answer the most common questions, or that act as virtual teaching assistants. In this blog post, I will show you how to create a simple assistant bot that can handle several questions from the field of Geography.&lt;/P&gt;
&lt;P&gt;Before we jump to this task, let’s talk about Microsoft conversational AI stack in general, and consider different development options.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="conversational-ai-development-stack"&gt;Conversational AI Development Stack&lt;/H1&gt;
&lt;P&gt;When it comes to conversational AI, we can logically think of a conversational agent as having two main parts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Conversational interface&lt;/STRONG&gt; handles passing messages from the user to the bot and back. It takes care of communication between the user’s messaging agent (such as Microsoft Teams, Skype or Telegram) and our application logic, and includes code to handle request-response logic.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Intelligent backend&lt;/STRONG&gt; adds some AI functionality to your bot, such as recognizing the user’s phrases or finding the best possible answer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A bot can exist without any intelligent backend, but it would not be smart. Still, bots like that are useful for automating simple tasks, such as form filling, or handling some pre-defined workflow.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IMG src="http://soshnikov.com/images/blog/bots-arch.png" border="0" alt="Bots Architecture" width="592" height="359" /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here I present a slightly simplified view of the whole &lt;A href="https://github.com/microsoft/botframework-sdk#bot-framework-ecosystem" target="_blank" rel="noopener"&gt;Bot Ecosystem&lt;/A&gt;, but this way it is easier to get the picture.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conversational Interface: Microsoft Bot Framework and Azure Bot Service&lt;/H2&gt;
&lt;P&gt;At the heart of conversational interface is &lt;A href="https://dev.botframework.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Microsoft Bot Framework&lt;/A&gt; - an open-source development framework (with source code available &lt;A href="https://github.com/microsoft/botframework-sdk" target="_blank" rel="noopener"&gt;on GitHub&lt;/A&gt;), which contains useful abstractions for bot development. The main idea of Bot Framework is to abstract communication channel, and develop bots as web endpoints that asynchronously handle request-response communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;Decoupling of bot logic and communication channel allows you to develop bot code once, and then connect it easily to different platforms, such as Skype, Teams or Telegram. Omnichannel bots are now made simple!&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The Bot Framework SDK supports C#, Node.js, Python and Java, although C# and Node.js are the recommended options.&lt;/P&gt;
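&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To give a feel for the programming model, here is a minimal sketch of a message handler written with the Python flavor of the SDK (the &lt;CODE class="language-plaintext highlighter-rouge"&gt;botbuilder-core&lt;/CODE&gt; package). It only shows the handler itself; wiring it to a web endpoint through an adapter is omitted, and the exact hosting code depends on your web framework.&lt;/P&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;# Minimal echo-bot sketch with the Bot Framework SDK for Python (botbuilder-core).
# The adapter and web host wiring are intentionally left out.
from botbuilder.core import ActivityHandler, TurnContext

class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the incoming text back through whatever channel it arrived from
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;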
&lt;P&gt;To host bots developed with Bot Framework on Azure, you use &lt;A href="https://azure.microsoft.com/services/bot-services/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt;. It hosts the bot logic itself (either as a web application or an Azure Function), and allows you to declaratively define the physical channels that your bot will be connected to. You can &lt;A href="https://docs.microsoft.com/azure/bot-service/bot-service-manage-channels?view=azure-bot-service-4.0&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;connect your bot to Skype or Telegram through the Azure Portal&lt;/A&gt; in a few simple steps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Intelligent Backend: LUIS and QnA Maker&lt;/H2&gt;
&lt;P&gt;Many modern bots support some form of natural language interaction. To do this, the bot needs to understand the user’s phrase, which is typically done through &lt;STRONG&gt;intent classification&lt;/STRONG&gt;. We define a number of possible &lt;STRONG&gt;intents&lt;/STRONG&gt; or actions that the bot can support, and then map an input phrase to one of the intents.&lt;/P&gt;
&lt;P&gt;This mapping is typically done using a neural network trained on a dataset of sample phrases. To take away the complexity of training your own neural network model, Microsoft provides the &lt;STRONG&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Language Understanding Intelligent Service&lt;/A&gt;&lt;/STRONG&gt;, or LUIS, which allows you to train a model either &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-how-to-start-new-app/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;through the web interface&lt;/A&gt;, or &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-tutorial-node-import-utterances-csv/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;through an API&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-utteranceintentmapping.png" border="0" alt="Bot Utterance-Intent Mapping" width="468" height="188" /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to intent classification, LUIS also performs &lt;STRONG&gt;named entity recognition&lt;/STRONG&gt; (or &lt;A href="https://en.wikipedia.org/wiki/Named-entity_recognition" target="_blank" rel="noopener"&gt;NER&lt;/A&gt;). It can automatically extract some entities of well-known types, such as geolocations or references to date and time, and can learn to extract some user-defined entities as well.&lt;/P&gt;
&lt;P&gt;With the entities extracted and the intent correctly determined, it becomes much easier to program the logic of your bot. This is often done using the &lt;STRONG&gt;slot filling&lt;/STRONG&gt; technique: entities extracted from the user’s input populate slots in a dictionary, and if more values are required to perform the task, an additional dialog is initiated to ask the user for the missing information.&lt;/P&gt;
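&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of how intent recognition and slot filling fit together in code, the Python snippet below calls the LUIS prediction REST endpoint and copies recognized entities into a slot dictionary. The endpoint, app ID and key are placeholders, the exact URL shape may differ for your LUIS resource and API version, and in a real bot the SDK or Composer would make this call for you.&lt;/P&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;import requests

# Placeholders - use your own LUIS prediction endpoint, app ID and key
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
APP_ID = "YOUR-LUIS-APP-ID"
KEY = "YOUR-PREDICTION-KEY"

def predict(utterance: str):
    # LUIS v3 prediction call; the URL shape depends on your resource and API version
    url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    resp = requests.get(url, params={"query": utterance, "subscription-key": KEY})
    resp.raise_for_status()
    return resp.json()["prediction"]

prediction = predict("What is the capital of Russia?")
slots = {"country": None}
# Slot filling: copy any recognized entities we care about into the slots...
slots.update({k: v for k, v in prediction["entities"].items() if k in slots})
# ...and if something is still missing, ask a follow-up question
missing = [name for name, value in slots.items() if value is None]
if missing:
    print(f"Could you tell me the {missing[0]}?")
else:
    print("Intent:", prediction["topIntent"], "| slots:", slots)
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;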
&lt;P&gt;Another type of bot behavior that often comes up is the ability to find the best matching phrase or piece of information in some table, i.e. to do an &lt;STRONG&gt;intelligent lookup&lt;/STRONG&gt;. It is useful if you want to provide a FAQ-style bot that can answer users’ questions based on a database of answers, or if you just want to program chit-chat behavior with some common responses. To implement this functionality, you can use &lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;QnA Maker&lt;/A&gt; - a service that encapsulates &lt;A href="https://docs.microsoft.com/azure/search/search-what-is-azure-search/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; and provides a simple way to build question-answering functionality. You can index any existing FAQ document, or provide question-answer pairs through the web interface, and then hook QnA Maker up to your bot with a few lines of code.&lt;/P&gt;
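&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, querying a published QnA Maker knowledge base is a single REST call. Below is a hedged Python sketch; the runtime host, knowledge base ID and endpoint key are placeholders that you obtain after publishing the knowledge base.&lt;/P&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;import requests

# Placeholders from your published QnA Maker resource
RUNTIME_HOST = "https://YOUR-QNA-RESOURCE.azurewebsites.net"
KB_ID = "YOUR-KNOWLEDGE-BASE-ID"
ENDPOINT_KEY = "YOUR-ENDPOINT-KEY"

def ask(question: str):
    # generateAnswer returns the best-matching answers with confidence scores
    resp = requests.post(
        f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
        json={"question": question, "top": 1},
    )
    resp.raise_for_status()
    answers = resp.json()["answers"]
    return answers[0]["answer"] if answers else "Sorry, I do not know."

print(ask("What is a capital?"))
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;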
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bot Development: Composer, Power Virtual Agents or Code?&lt;/H2&gt;
&lt;P&gt;As I mentioned above, bots can be developed using your favorite programming language. However, this approach requires you to write some boilerplate code and understand asynchronous calls, and therefore has a significant learning curve. There are some simpler options that are good for a start!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;It is recommended to start developing your bot using a low-code approach through &lt;A href="https://docs.microsoft.com/composer/introduction?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Bot Framework Composer&lt;/A&gt; - an interactive visual tool that allows you to design your bot by drawing dialog diagrams.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Composer integrates LUIS and QnA Maker out of the box, so you do not need to train those services through the web interface first and then worry about integrating them into your bot. From the same UI, you can specify events triggered by certain user phrases, and the dialogs that respond to them.&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-overview-image.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-overview-image.png" border="0" alt="Bot Framework Composer Main UI" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Another similar low-code option is &lt;A href="https://powervirtualagents.microsoft.com/blog/how-to-use-conversational-ai-to-enhance-engagement/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Power Virtual Agents&lt;/A&gt; (PVA), a tool from the &lt;A href="https://powerplatform.microsoft.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt; family of tools for business automation. It is especially useful if you are already familiar with Power Platform and use any of its tools to enhance productivity. In that case, PVA will be a natural choice, and it will integrate nicely into all your data points and business processes. In short, Composer is a great low-code tool for developers, while PVA is more for business users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="getting-started-with-bot-development"&gt;Getting Started with Bot Development&lt;/H1&gt;
&lt;P&gt;Let me show you how we can start the development of a simple educational bot that will help K-12 students with their geography classes.&amp;nbsp;&lt;SPAN&gt;We will develop a simple bot, which you can later host on Microsoft Azure and connect to most popular communication channels, such as Teams, Slack or Telegram. If you do not have an Azure account, you can&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;get a free trial&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;(or&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/students/?WT.mc_id=ca-13976-dmitryso&amp;amp;OCID=AID3029145" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;SPAN&gt;, if you are a student).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;To begin with, we will implement three simple functions in our bot:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Being able to tell a capital city for a country (&lt;EM&gt;What is the capital of Russia?&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;Giving definitions of the most useful terms, e.g. answering questions like &lt;EM&gt;What is a capital?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;Support for simple chit-chat (&lt;EM&gt;How are you today?&lt;/EM&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These functions cover the two most important elements of our intelligent backend: &lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;QnA Maker&lt;/A&gt; (which can be used to implement the last two points) and &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;LUIS&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Starting with Bot Composer&lt;/H2&gt;
&lt;P&gt;To begin development, you need to install Bot Framework Composer - I recommend installing it as a desktop application. Then, after starting it, click the &lt;STRONG&gt;New&lt;/STRONG&gt; button, and choose the &lt;STRONG&gt;QnA Maker and LUIS&lt;/STRONG&gt; starting template for your bot:&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-create1.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-create1.png" border="0" alt="Composer Create" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Once you do that, you will see the main screen of composer, with a list of triggers on the left, and the main pane to design dialogs:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-mainscreen.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-mainscreen.png" border="0" alt="Composer Main Screen" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here, you can delete the unused &lt;STRONG&gt;BuySurface&lt;/STRONG&gt; trigger (which is left over from the demo), and go to &lt;STRONG&gt;Welcome message&lt;/STRONG&gt; to customize the phrase that the bot says to a new user. The logic of the Welcome Message trigger is a bit complex; look for a box called &lt;STRONG&gt;Send a response&lt;/STRONG&gt;, and change the message in the right pane.&lt;/P&gt;
&lt;P&gt;The language used to define phrases is called &lt;STRONG&gt;Language generation&lt;/STRONG&gt;, or &lt;STRONG&gt;lg&lt;/STRONG&gt;. A few useful syntax rules to know:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A phrase starts with &lt;CODE class="language-plaintext highlighter-rouge"&gt;-&lt;/CODE&gt;. If you want to choose from a number of replies, specify several phrases, and one of them will be selected randomly. For example:
&lt;DIV class="language-plaintext highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;- Hello, I am geography helper bot!
- Hey, welcome!
- Hi, looking forward to chat with you!
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;&lt;/LI&gt;
&lt;LI&gt;Comments start with &lt;CODE class="language-plaintext highlighter-rouge"&gt;&amp;gt;&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Some additional definitions start with &lt;CODE class="language-plaintext highlighter-rouge"&gt;@&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;You can use &lt;CODE class="language-plaintext highlighter-rouge"&gt;${...}&lt;/CODE&gt; syntax for variable substitution (we will see an example of this later)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Connecting to Azure Services&lt;/H2&gt;
&lt;P&gt;To use intelligent backend, you need to create Azure resources for LUIS and QnA Maker and provide corresponding keys to Composer:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-how-to-azure-subscription#create-luis-resources-in-the-azure-portal/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create LUIS Authoring Resource&lt;/A&gt;, and make sure to remember the region in which it was created, and copy key from &lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt; page in Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/how-to/set-up-qnamaker-service-azure?tabs=v1&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create QnA Maker Service&lt;/A&gt;, and copy corresponding key&lt;/LI&gt;
&lt;LI&gt;In Composer, go to bot settings by pressing &lt;STRONG&gt;Project Settings&lt;/STRONG&gt; button in the left menu (look for a wrench icon, or expand the menu if unsure). Under settings, fill in &lt;STRONG&gt;LUIS Authoring Key&lt;/STRONG&gt;, &lt;STRONG&gt;LUIS region&lt;/STRONG&gt; and &lt;STRONG&gt;QnA Maker Subscription key&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Starting the Bot&lt;/H2&gt;
&lt;P&gt;At this point, you can already start chatting with your bot. Click &lt;STRONG&gt;Start bot&lt;/STRONG&gt; in the upper right corner, and allow some time for the magic to happen. When starting a bot, Composer actually creates and trains the underlying LUIS model, builds the Bot Framework project, and starts a local web server with a copy of the bot, ready to serve your requests.&lt;/P&gt;
&lt;P&gt;To chat with the bot, click the &lt;STRONG&gt;Test in emulator&lt;/STRONG&gt; button (you need to have the &lt;A href="https://github.com/Microsoft/BotFramework-Emulator/blob/master/README.md" target="_blank" rel="noopener"&gt;Bot Framework Emulator&lt;/A&gt; installed for this to work). This automatically opens up the chat window with all required settings, and you can start talking to your bot right away.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Creating QnA Maker Knowledge base&lt;/H2&gt;
&lt;P&gt;Let’s start by creating a term dictionary using QnA Maker. Click &lt;STRONG&gt;QnA&lt;/STRONG&gt; in the left menu, and then &lt;STRONG&gt;Create new KB&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-create.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-create.png" border="0" alt="QnA New KB" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Here you can either start from some existing data (provided as a URL to an HTML, PDF, Word or Excel document), or start creating phrases from scratch.&lt;/P&gt;
&lt;P&gt;In most cases you would have a document to start with, but here we will start from scratch. After creating a knowledge base, you can click &lt;STRONG&gt;Add QnA Pair&lt;/STRONG&gt; to add all the question-answer combinations you need. Note that you can add several variants of a question by using the &lt;STRONG&gt;Add alternative phrasing&lt;/STRONG&gt; link.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-phrases.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-phrases.png" border="0" alt="QnA Phrases" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In our case, I have added a &lt;EM&gt;how are you&lt;/EM&gt; phrase (with several options), and a phrase to explain the meaning of the word &lt;EM&gt;capital&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;Having added the phrases, we can start the bot and make sure that it correctly reacts to the given phrases, or to similar versions of them - QnA Maker does not require an exact match, it looks for &lt;EM&gt;similar&lt;/EM&gt; phrases to decide which answer to provide.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Adding Specific Actions with LUIS&lt;/H2&gt;
&lt;P&gt;To give our bot the ability to name the capitals of countries, we need to add specific functionality that looks up a capital when triggered by a certain phrase. We definitely do not want to type all 200+ countries and their capitals into QnA Maker!&lt;/P&gt;
&lt;P&gt;Functionality to get information about a country is openly available via the &lt;A href="https://restcountries.eu/" target="_blank" rel="noopener"&gt;REST Countries&lt;/A&gt; API. For example, if we make a GET request to &lt;CODE class="language-plaintext highlighter-rouge"&gt;&lt;A href="https://restcountries.eu/rest/v2/name/Russia" target="_blank" rel="noopener"&gt;https://restcountries.eu/rest/v2/name/Russia&lt;/A&gt;&lt;/CODE&gt;, we will get a JSON response like this:&lt;/P&gt;
&lt;DIV class="language-json highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="nl"&gt;"name"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;"Russian Federation"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
   &lt;SPAN class="nl"&gt;"topLevelDomain"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:[&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;".ru"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
   &lt;SPAN class="nl"&gt;"capital"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;"Moscow"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="err"&gt;...&lt;/SPAN&gt; &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
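&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Outside of Composer, the same lookup is only a few lines of code. Here is a small Python sketch against the endpoint above; note that the API returns status code 404 for an unknown country, which is exactly the case we will also handle in the bot dialog below.&lt;/P&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;import requests

def get_capital(country: str):
    # Same REST Countries endpoint that the bot will call from Composer
    resp = requests.get(f"https://restcountries.eu/rest/v2/name/{country}")
    if resp.status_code != 200:       # e.g. 404 when the country is not found
        return None
    return resp.json()[0]["capital"]  # first match, as in the JSON above

print(get_capital("Russia"))  # Moscow
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;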
&lt;P&gt;To ask for the capital of a given country, a user will say something like &lt;EM&gt;What is a capital of Russia?&lt;/EM&gt;, or &lt;EM&gt;I want to know the capital of Italy&lt;/EM&gt;. LUIS can recognize the intent of such a phrase and also extract the name of the country as an entity.&lt;/P&gt;
&lt;P&gt;To add a LUIS trigger, from the &lt;STRONG&gt;Design&lt;/STRONG&gt; page of the Composer, select your bot dialog and press “…” next to it. You will see the &lt;STRONG&gt;Add a trigger&lt;/STRONG&gt; option in the drop-down box. Select it, and then choose &lt;STRONG&gt;Intent recognized&lt;/STRONG&gt; as the trigger type.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;EM&gt;&lt;STRONG&gt;Intent Recognized&lt;/STRONG&gt; is the most common trigger type. However, you can specify &lt;STRONG&gt;Dialog events&lt;/STRONG&gt;, that allow you to structure part of the conversation as a separate dialog, or some conversational activities, such as &lt;STRONG&gt;Handoff to human&lt;/STRONG&gt;.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Then, specify trigger phrases, using a variation of LG language. In our case, we will use the following:&lt;/P&gt;
&lt;DIV class="language-json highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;what&lt;/SPAN&gt; &lt;SPAN class="err"&gt;is&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Russia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;?&lt;/SPAN&gt;
&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;I&lt;/SPAN&gt; &lt;SPAN class="err"&gt;want&lt;/SPAN&gt; &lt;SPAN class="err"&gt;to&lt;/SPAN&gt; &lt;SPAN class="err"&gt;know&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Italy&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;.&lt;/SPAN&gt;
&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;Give&lt;/SPAN&gt; &lt;SPAN class="err"&gt;me&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Greece&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;!&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Here, we specify a number of trigger phrases starting with &lt;CODE class="language-plaintext highlighter-rouge"&gt;-&lt;/CODE&gt;, and we indicate that we want to extract part of the phrase as an entity &lt;CODE class="language-plaintext highlighter-rouge"&gt;country&lt;/CODE&gt;. LUIS will automatically train a model to extract entities based on the provided utterances, so make sure to provide a number of possible phrases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;There are some pre-defined entity types, such as &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;datetimeV2&lt;/CODE&gt;, &lt;CODE class="language-plaintext highlighter-rouge"&gt;number&lt;/CODE&gt;&lt;EM&gt;, etc. Using pre-defined types is recommended, and entity type can be specified using &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;@ &amp;lt;entity_type&amp;gt; &amp;lt;entity_name&amp;gt;&lt;/CODE&gt;&lt;EM&gt; notation. In our case, we can use &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;geographyV2&lt;/CODE&gt;&lt;EM&gt; entity type, which extracts geographic locations, including countries.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Once we have defined the phrase recognizer, we need to add a block that will make the actual REST call and fetch information about the given country. We use the &lt;STRONG&gt;Send HTTP Request&lt;/STRONG&gt; block, and specify the following parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Method = GET&lt;/LI&gt;
&lt;LI&gt;Url = &lt;CODE class="language-plaintext highlighter-rouge"&gt;https://restcountries.eu/rest/v2/name/${@country}&lt;/CODE&gt;. Here, &lt;CODE class="language-plaintext highlighter-rouge"&gt;${@country}&lt;/CODE&gt; will be substituted with the name of the recognized country.&lt;/LI&gt;
&lt;LI&gt;Result property = dialog.result&lt;/LI&gt;
&lt;LI&gt;Response type = json&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This block makes the REST call to the API, and the result is stored in the &lt;CODE class="language-plaintext highlighter-rouge"&gt;dialog.result&lt;/CODE&gt; property. If we provided a valid country, the JSON result will be automatically parsed; otherwise, an error status code (in our case, 404) will be recorded in &lt;CODE class="language-plaintext highlighter-rouge"&gt;dialog.result.statusCode&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;To test whether the call was successful and define different logic based on the result, we insert a &lt;STRONG&gt;Branch: If/Else&lt;/STRONG&gt; block and specify the following condition: &lt;CODE class="language-plaintext highlighter-rouge"&gt;= equals(dialog.result.statusCode,200)&lt;/CODE&gt;. The true condition corresponds to the left branch, where we insert a &lt;STRONG&gt;Send a response&lt;/STRONG&gt; block with the following text:&lt;/P&gt;
&lt;DIV class="language-plaintext highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;- A capital of ${@country} is ${dialog.result.content[0].capital}
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;If the result code is not 200, the right branch is executed, where we insert an error message. Our final dialog should look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-cooldialog.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-cooldialog.png" border="0" alt="" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Adding Preconfigured Chit-Chat Functionality&lt;/H2&gt;
&lt;P&gt;It would be nice if your bot could also respond to everyday phrases, such as &lt;EM&gt;How old are you?&lt;/EM&gt;, or &lt;EM&gt;Do you enjoy being a bot?&lt;/EM&gt; We could define all those phrases in QnA Maker, but that would take quite some time. Luckily, there is &lt;A href="https://github.com/microsoft/BotBuilder-PersonalityChat?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Project Personality Chat&lt;/A&gt;, which contains a number of pre-defined QnA Maker knowledge bases in several languages and for a number of personalities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Professional&lt;/LI&gt;
&lt;LI&gt;Friendly&lt;/LI&gt;
&lt;LI&gt;Witty&lt;/LI&gt;
&lt;LI&gt;Caring&lt;/LI&gt;
&lt;LI&gt;Enthusiastic&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can grab a link to the knowledge base &lt;A href="https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets" target="_blank" rel="noopener"&gt;from here&lt;/A&gt;, then go to the &lt;A href="http://qnamaker.ai" target="_blank" rel="noopener"&gt;QnA Maker Portal&lt;/A&gt;, find your knowledge base, and add this URL to your service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-chitchat.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-chitchat.png" border="0" alt="Adding URL to QnAMaker" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Having done that, click &lt;STRONG&gt;Save and Train&lt;/STRONG&gt;, and enjoy talking to your bot! You can even try asking it about &lt;A href="https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#The_Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_is_42" target="_blank" rel="noopener"&gt;life, the universe and everything&lt;/A&gt;!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Testing the Bot and Publishing to Azure&lt;/H2&gt;
&lt;P&gt;Now that our basic bot functionality is complete, we can test the bot in bot emulator:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-emulator-sample.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-emulator-sample.png" border="0" alt="Chat in bot emulator" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the bot is running locally, we can deploy it to Azure right from the Composer. If you go to &lt;STRONG&gt;Publish&lt;/STRONG&gt; in the left menu, you will be able to define a &lt;STRONG&gt;Publishing profile&lt;/STRONG&gt; for your bot. Select &lt;STRONG&gt;Define new publishing profile&lt;/STRONG&gt;, and choose one of the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-publishprofile.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-publishprofile.png" border="0" alt="Publishing profiles" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;The most common way to deploy is to use an &lt;STRONG&gt;Azure Web App&lt;/STRONG&gt;. Composer only requires you to provide an Azure subscription and a resource group name, and it takes care of creating all the required resources (including bot-specific LUIS/QnA Maker instances) automatically. It may take a while, but it will save you a lot of time and the hassle of a manual deployment.&lt;/P&gt;
&lt;P&gt;Once the bot is published to Azure, you can go to the Azure portal and configure the &lt;STRONG&gt;Channels&lt;/STRONG&gt; through which your bot will be available to the external world, such as Telegram, Microsoft Teams, Slack or Skype.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-azure-channels.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-azure-channels.png" border="0" alt="Add Bot Channels" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1 id="conclusion"&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;As you can see, creating a bot with Bot Framework Composer is quite easy. In fact, you can create quite powerful bots almost without any code! And you can also hook them up to your enterprise endpoints using features such as HTTP REST APIs and OAuth authorization.&lt;/P&gt;
&lt;P&gt;However, there are cases when you need to significantly extend bot functionality using code. In this case, you have several options:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Keep main bot authoring in Bot Composer, and develop &lt;A href="https://docs.microsoft.com/composer/how-to-add-custom-action/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Custom Actions&lt;/A&gt; in C#&lt;/LI&gt;
&lt;LI&gt;Export the complete bot code using the &lt;STRONG&gt;Custom Runtime&lt;/STRONG&gt; feature of Composer, which gives you the full bot code in C# or JavaScript that you can then customize as you wish. This approach is not ideal, because you will lose the ability to maintain the source of your bot in Composer.&lt;/LI&gt;
&lt;LI&gt;Write a bot from the beginning in one of the supported languages (C#, JS, Python or Java) using &lt;A href="https://dev.botframework.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Bot Framework&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;&lt;FONT size="4"&gt;If you want to explore how the same Educational bot for Geography can be written in C#, check out this Microsoft Learn Module: &lt;A href="https://docs.microsoft.com/learn/modules/responsible-bots/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create a chat bot to help students learn with Azure Bot Service&lt;/A&gt;.&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I am sure the conversational approach to UI can prove useful in many cases, and the Microsoft conversational AI platform offers a wide variety of tools to support all your scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Feb 2021 18:23:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/hello-bot-conversational-ai-on-microsoft-platform/ba-p/2139570</guid>
      <dc:creator>shwars</dc:creator>
      <dc:date>2021-02-18T18:23:54Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2142056#M169</link>
<description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When will it be available for (West Europe) Portugal?&lt;/P&gt;</description>
      <pubDate>Wed, 17 Feb 2021 07:37:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2142056#M169</guid>
      <dc:creator>Dan Nite</dc:creator>
      <dc:date>2021-02-17T07:37:11Z</dc:date>
    </item>
    <item>
      <title>Re: Integrating AI: Best Practices and Resources to Get Started</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/bc-p/2137140#M165</link>
      <description>&lt;P&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Mon, 15 Feb 2021 19:33:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/bc-p/2137140#M165</guid>
      <dc:creator>Luigi Bruno</dc:creator>
      <dc:date>2021-02-15T19:33:27Z</dc:date>
    </item>
    <item>
      <title>Computer Vision Read (OCR) API previews 73 human languages and new features on cloud and on-premise</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-73-human-languages-and-new/ba-p/2121341</link>
      <description>&lt;H1&gt;Overview&lt;/H1&gt;
&lt;P&gt;Businesses today are applying Optical Character Recognition (OCR) and document AI technologies to rapidly convert their large troves of documents and images into actionable insights. These insights power robotic process automation (RPA), knowledge mining, and industry-specific solutions. However, there are several challenges to successfully implementing these scenarios at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;The challenge&lt;/H1&gt;
&lt;P&gt;Your customers are global, and their content is global, so your systems should also speak and read international languages. Nothing is more frustrating than failing to reach your global customers due to a lack of support for their native languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Secondly, your documents are large, with potentially hundreds or even thousands of pages. To complicate things, they have printed and handwritten text mixed in the same documents. To make matters worse, they contain multiple languages in the same document, possibly even on the same line.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thirdly, you are a business that’s trusted by your customers to protect their data and information. If your customers are in industries such as healthcare, insurance, banking, and finance, you have stringent data privacy and security needs. You need the flexibility to deploy your solutions on the world’s most trusted cloud or on-premise within your environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, you should not have to choose between world-class AI quality, world languages support, and deployment on cloud or on-premise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Computer Vision OCR (Read API)&lt;/H1&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt;Microsoft’s Computer Vision OCR (Read)&lt;/A&gt; technology is available as a Cognitive Services Cloud API and as Docker containers. Customers use it in diverse scenarios on the cloud and within their networks to help automate image and document processing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/TX7XwwIG5lw" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/TX7XwwIG5lw/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What’s New&lt;/H1&gt;
&lt;P&gt;We are announcing Computer Vision's &lt;A href="https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005" target="_blank" rel="noopener"&gt;Read API v3.2 public preview&lt;/A&gt; as a cloud service and Docker container. It includes the following updates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support#optical-character-recognition-ocr" target="_blank" rel="noopener" data-linktype="relative-path"&gt;OCR for 73 languages&lt;/A&gt;&amp;nbsp;including Simplified and Traditional Chinese, Japanese, Korean, and several Latin languages.&lt;/LI&gt;
&lt;LI&gt;Natural reading order for the text line output.&lt;/LI&gt;
&lt;LI&gt;Handwriting style classification for text lines.&lt;/LI&gt;
&lt;LI&gt;Text extraction for selected pages for a multi-page document.&lt;/LI&gt;
&lt;LI&gt;Available as a&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers?tabs=version-3-2" target="_blank" rel="noopener" data-linktype="relative-path"&gt;Distroless container&lt;/A&gt;&amp;nbsp;for on-premise deployment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;First wave of language expansion&lt;/H1&gt;
&lt;P&gt;With the latest Read preview version, we are announcing &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support" target="_blank" rel="noopener"&gt;OCR support for 73 languages&lt;/A&gt;, including Chinese Simplified, Chinese Traditional, Japanese, Korean, and several Latin languages, a 10x increase from the Read 3.1 GA version.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks to Read’s universal model, you can extract text in all of these languages with the same Read API call, without the optional language parameter. We recommend not using the language parameter if you are unsure of the language of the input document or image at run time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The latest Read preview supports the following languages:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Read3.2-Preview-Languages.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254311iAC5BDA5C8AD7426E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Read3.2-Preview-Languages.png" alt="Read 3.2 Preview Languages" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Read 3.2 Preview Languages&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, once you have &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?tabs=visual-studio&amp;amp;pivots=programming-language-rest-api#prerequisites" target="_blank" rel="noopener"&gt;created a Computer Vision resource&lt;/A&gt;, the following curl code will call the Read 3.2 preview with the sample image.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make the following changes in the command where needed:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Replace the value of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;&amp;lt;subscriptionKey&amp;gt;&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with your subscription key.&lt;/LI&gt;
&lt;LI&gt;Replace the first part of the request URL (&lt;CODE&gt;westcentralus&lt;/CODE&gt;) with the text in your own endpoint URL.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation. The URL expires in 48 hours.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 class="lia-align-left"&gt;Natural reading order output (Latin languages)&lt;/H1&gt;
&lt;P class="lia-align-left"&gt;OCR services typically output text in a certain order in their output. With the new Read preview, choose to get the text lines in the natural reading order instead of the default left to right and top to bottom ordering. Use the new&amp;nbsp;&lt;EM&gt;readingOrder&lt;/EM&gt;&amp;nbsp;query parameter with the “&lt;EM&gt;natural&lt;/EM&gt;”&amp;nbsp;value for a more human-friendly reading order output as shown in the following example.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following visualization of the JSON-formatted service response shows the text line order for the same document. Note that the first column’s text lines are output in order before the second column, and finally the third column.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ocr-read-order-example.png" style="width: 852px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254168i6B2E989758BBAB7A/image-size/large?v=v2&amp;amp;px=999" role="button" title="ocr-read-order-example.png" alt="OCR Read order example" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR Read order example&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;For example, the following curl code sample calls the Read 3.2 preview to analyze the &lt;A href="https://docs.microsoft.com/en-us/microsoft-365-app-certification/media/dec01.png" target="_blank" rel="noopener"&gt;sample newsletter image&lt;/A&gt; and output a natural reading order for the extracted text lines.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze?readingOrder=natural -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://docs.microsoft.com/en-us/microsoft-365-app-certification/media/dec01.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-left"&gt;&lt;SPAN style="color: inherit; font-family: inherit; font-size: 30px;"&gt;Handwriting style classification (Latin languages)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-left"&gt;When you apply OCR on business forms and applications, it’s useful to know which parts of the form has handwritten text in them so that they can be handled differently. For example, comments and the signature areas of agreements typically contain handwritten text. With the latest Read preview, the service will classify Latin languages-only text lines as handwritten style or not along with a confidence score.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;For example, in the following image, you see the appearance object in the JSON response with the style classified as handwriting along with a confidence score.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="sanjeev_jagtap_1-1613027644240.png" style="width: 726px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254124i9123CD1B35534690/image-size/large?v=v2&amp;amp;px=999" role="button" title="sanjeev_jagtap_1-1613027644240.png" alt="OCR handwriting style classification for text lines" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR handwriting style classification for text lines&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;The following code analyzes the &lt;A href="https://intelligentkioskstore.blob.core.windows.net/visionapi/suggestedphotos/2.png" target="_blank" rel="noopener"&gt;sample handwritten image&lt;/A&gt; with the Read 3.2 preview.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://intelligentkioskstore.blob.core.windows.net/visionapi/suggestedphotos/2.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 class="lia-align-left"&gt;Extract text from select pages of a document&lt;/H1&gt;
&lt;P class="lia-align-left"&gt;Many standard business forms have fillable sections followed by long informational sections that are identical between documents, and versions of those documents. At other times, you will be interested in applying OCR to specific pages of interest for business-specific reasons.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;The following curl code sample calls the Read 3.2 preview to analyze the &lt;A href="https://www.annualreports.com/HostedData/AnnualReports/PDF/NASDAQ_MSFT_2019.pdf" target="_blank" rel="noopener"&gt;financial report PDF document&lt;/A&gt; with the &lt;EM&gt;pages&lt;/EM&gt; input parameter set to the page range, "3-5".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze?pages=3-5 -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://www.annualreports.com/HostedData/AnnualReports/PDF/NASDAQ_MSFT_2019.pdf\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following JSON extract shows the resulting OCR output that extracted the text from pages 3, 4, and 5. You should see a similar output for your sample documents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;"readResults": [
      {
        "page": 3,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      },
      {
        "page": 4,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      },
      {
        "page": 5,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      }
]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;On-premise option with Distroless container&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sanjeev_jagtap_3-1613027644248.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254126i55D26341C58C22DE/image-size/small?v=v2&amp;amp;px=200" role="button" title="sanjeev_jagtap_3-1613027644248.png" alt="sanjeev_jagtap_3-1613027644248.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Read 3.2 preview OCR container provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All features from the Read cloud API preview&lt;/LI&gt;
&lt;LI&gt;Distroless container release&lt;/LI&gt;
&lt;LI&gt;Performance and memory enhancements&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;Install and run the Read containers&lt;/A&gt; to get started and find the recommended configuration settings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Get Started&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;Create a Computer Vision resource&lt;/A&gt; in Azure.&lt;/LI&gt;
&lt;LI&gt;Follow our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?tabs=visual-studio&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;SDK and REST API QuickStarts&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt; OCR (Read)&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;See the list of &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support" target="_blank" rel="noopener"&gt;OCR supported languages&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;Read containers&lt;/A&gt; and download them from Docker Hub.&lt;/LI&gt;
&lt;LI&gt;Write to us at &lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 15 Mar 2021 00:18:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-73-human-languages-and-new/ba-p/2121341</guid>
      <dc:creator>sanjeev_jagtap</dc:creator>
      <dc:date>2021-03-15T00:18:16Z</dc:date>
    </item>
    <item>
      <title>Integrating AI: Best Practices and Resources to Get Started</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/ba-p/2115408</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="none"&gt;We use&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Microsoft AI Documentation" href="https://docs.microsoft.com/ai/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;AI (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;A&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rtificial&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;I&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ntelligence)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;integrated applications daily&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;from search engines&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;optimized&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;find the most relevant content&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;recommendation engines for streaming or shopping.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;During&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;AI’s early years rising to popularity, improving applications with AI&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;was only possible for companies with big budgets dedicated to research and experts&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;preventing&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;companies&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;that&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;cannot&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;effort an AI team to compete.&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;T&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;oday&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;AI is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;readily available for any product&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;without having to invest in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;research&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and development.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;There are open-source libraries that can help you train Machine Learning models like&amp;nbsp;&lt;/SPAN&gt;&lt;A title="TensorFlow and Azure Machine Learning" href="https://docs.microsoft.com/azure/machine-learning/how-to-train-tensorflow?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;TensorFlow&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;. 
With a fraction of the effort and the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;cost, &lt;A title="Azure Cognitive Services" href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;pre-trained AI services&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are available to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;easily integrate into your applications&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;with &lt;A title="Custom Vision Rest APIs" href="https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&amp;amp;pivots=programming-language-csharp&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;APIs&lt;/A&gt; and &lt;A title="Custom Vision Web Tool" href="https://www.customvision.ai/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;UI based tools to train custom models&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;for your specific use case.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;In Integrating AI&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;series,&lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;I&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;aim to&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;help you decide if and how to integrate AI into your applications,&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;get you started with&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Azure’s ready to use AI solutions, Cognitive&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Services&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;and answer your most&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;frequent questions&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;when getting started.&lt;/SPAN&gt;&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s start with these fundamental questions:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;What are the problems you can solve with AI?&lt;/LI&gt;
&lt;LI&gt;What do you need to know before starting to build your solution?&lt;/LI&gt;
&lt;LI&gt;How do you measure the success of your new AI features?&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;What are the problems you can solve with AI?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/qJGRd34Hnl0" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/qJGRd34Hnl0/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AI is a groundbreaking technology, but it is not a magical solution for everything. It is important to know whether you are adding value or solving an actual user problem. Complex products like Wikipedia and Reddit hold a lot of information, yet they use crowdsourcing and simple search to cater to unique needs without the help of AI. To make an informed decision, start with your users’ needs. What are the problems they face? Is there a process you could automate, like filling in expense forms with the Form Recognizer service? Could you send voice messages to your customers with updates using Speech Services? Do they make complex choices while using your product that could be tailored to each user with Personalizer? Do you need to improve the usability of your application with voice interactions and Language Understanding? It is important to solve a real need for your users instead of assuming which solution will be useful. User research is the best way to figure out the issues, and a lot can be surfaced by user analytics. You can also use the Metrics Advisor AI service to detect anomalies and point to future AI solutions.&lt;/P&gt;
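&lt;P&gt;To illustrate how little code one of these automations can take, here is a minimal sketch (my addition for illustration, not an official sample) that pulls the merchant name and total from a receipt with the Form Recognizer Python client library. The endpoint, key, and receipt URL are hypothetical placeholders you would replace with your own resource values.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: extract receipt fields with the azure-ai-formrecognizer client library.
# The endpoint, key, and receipt URL are placeholders, not values from this article.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://my-form-recognizer.cognitiveservices.azure.com/"  # hypothetical resource
key = "YOUR_FORM_RECOGNIZER_KEY"
receipt_url = "https://example.com/receipts/lunch.jpg"  # hypothetical receipt image

client = FormRecognizerClient(endpoint, AzureKeyCredential(key))

# Start recognition and wait for the long-running operation to finish.
poller = client.begin_recognize_receipts_from_url(receipt_url)
for receipt in poller.result():
    merchant = receipt.fields.get("MerchantName")
    total = receipt.fields.get("Total")
    if merchant:
        print("Merchant:", merchant.value)
    if total:
        print("Total:", total.value)
&lt;/PRE&gt;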
&lt;P&gt;Once you have a clear definition of the problem and define how to measure success, it is time to explore practical solutions. You can read the &lt;A title="Azure customer stories" href="https://azure.microsoft.com/case-studies/?term=Cognitive+services&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure customer stories&lt;/A&gt; and learn from their methods and design process. For example, read &lt;A title="BBC's customer story" href="https://customers.microsoft.com/story/754836-bbc-media-entertainment-azure?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;BBC's customer story&lt;/A&gt; before you read the &lt;A title="BBC Technical Story" href="https://customers.microsoft.com/story/822271-bbc-deploys-beeb-a-custom-voice-assistant-on-azure?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;technical story&lt;/A&gt; of using &lt;A title="Azure Speech Services Documentation" href="https://docs.microsoft.com/azure/cognitive-services/speech-service/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure's Speech&lt;/A&gt;, &lt;A title="Azure Bot Service" href="https://docs.microsoft.com/azure/bot-service/?view=azure-bot-service-4.0&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt; and &lt;A title="Language Understanding Services" href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Language Understanding Services&lt;/A&gt; together to solve the customer needs they identified.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/NwVylAQGQhA" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/NwVylAQGQhA/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Most AI solutions fall into two categories. The first major use case for AI is automating mindless, repetitive jobs. If the users of an expense report or hiring application need to type information from a form or a receipt into your system, that step is easily automated with &lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;OCR (Optical Character Recognition)&lt;/A&gt;. Similar automations are possible for closed captioning, translation, classifying images, and automating alert messages.&lt;/P&gt;
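&lt;P&gt;As a rough illustration (added here, with a hypothetical endpoint, key, and image URL), the sketch below uses the Computer Vision Read API from the Python client library to pull the printed text out of a scanned form or receipt.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: read printed or handwritten text with the Computer Vision Read API.
# Endpoint, key, and image URL are placeholders for your own resource and document.
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://my-computer-vision.cognitiveservices.azure.com/"
key = "YOUR_COMPUTER_VISION_KEY"
image_url = "https://example.com/scanned-form.png"

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# The Read operation is asynchronous: start it, then poll for the result.
read_response = client.read(image_url, raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
&lt;/PRE&gt;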
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The second category covers complex human decisions based on data. You could easily give your friends recommendations on what to watch next, knowing what they like and what they don’t like. A streaming service with thousands of movies to choose from, however, cannot surface relevant content with simple filtering by genre or release date. It would take forever to choose what to watch by browsing unless you know the exact name of the movie. For a decision like recommending among thousands or millions of items, AI might make better suggestions for your best friend than you could, and it may even improve over time. Understanding people’s language and intent is another example. A human can easily understand and classify a review as positive or negative. For machines to detect the same emotions, you must do more than match certain words to get the sentiment.&lt;/P&gt;
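&lt;P&gt;As a small illustration of the sentiment case (my addition, with a placeholder endpoint and key), the Text Analytics client library returns a document-level label plus confidence scores rather than just matching keywords:&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: sentiment analysis with the azure-ai-textanalytics client library.
# Endpoint and key are placeholders for your own Cognitive Services resource.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://my-text-analytics.cognitiveservices.azure.com/"
key = "YOUR_TEXT_ANALYTICS_KEY"

client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))

reviews = [
    "The checkout flow was quick and the support team was wonderful.",
    "The app kept crashing and I never got my refund.",
]

# analyze_sentiment labels each document positive, neutral, negative, or mixed.
for doc in client.analyze_sentiment(documents=reviews):
    print(doc.sentiment, doc.confidence_scores.positive, doc.confidence_scores.negative)
&lt;/PRE&gt;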
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What do you need to know before starting to build your solution?&lt;/H2&gt;
&lt;P&gt;Some problems are easier to solve than others with AI. Experimenting with different tools to confirm your solution is important. All the Cognitive Services are easy to try out, and here is how to do that:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;A href="https://aidemos.microsoft.com/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;AI Demos Website&lt;/A&gt; gives you a hands-on experience of Cognitive Services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CogSerGif.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252682i9F4D0810243AC154/image-size/large?v=v2&amp;amp;px=999" role="button" title="CogSerGif.gif" alt="CogSerGif.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You can also download the &lt;A title="Intelligent Kiosk app" href="https://www.microsoft.com/p/intelligent-kiosk/9nblggh5qd84?activetab=pivot:overviewtab&amp;amp;WT.mc_id=aiml-0000-ayyonet" target="_blank" rel="noopener"&gt;Intelligent Kiosk app&lt;/A&gt; to try out the demos on your local machine.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kiosk.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252684i046DA206F60F3EB8/image-size/large?v=v2&amp;amp;px=999" role="button" title="kiosk.png" alt="kiosk.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once you create an Azure resource, you can see code samples and API call examples, and try out the REST API endpoints directly on the &lt;A title="Cognitive Services API Reference pages" href="https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-Preview-1/operations/Sentiment?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Cognitive Services API Reference pages&lt;/A&gt; (see the sketch after this list).&lt;/LI&gt;
&lt;LI&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="API.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254086iD180186E4CE9F208/image-size/large?v=v2&amp;amp;px=999" role="button" title="API.png" alt="API descriptions" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;API descriptions&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="codeSamples.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254087i8531724DC81A4630/image-size/large?v=v2&amp;amp;px=999" role="button" title="codeSamples.png" alt="Code Samples" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Code Samples&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="req.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254088iF7CABD739294341F/image-size/large?v=v2&amp;amp;px=999" role="button" title="req.png" alt="Request" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Request&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
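&lt;P&gt;For example, here is a minimal sketch of calling the Text Analytics sentiment endpoint directly over HTTP (my addition; the resource endpoint, key, and API version are placeholders, and the reference pages let you test the same request interactively):&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: calling the Text Analytics sentiment REST endpoint directly.
# Resource endpoint, key, and API version are placeholders; check the reference
# pages for the exact version your resource supports.
import requests

endpoint = "https://my-text-analytics.cognitiveservices.azure.com"
key = "YOUR_TEXT_ANALYTICS_KEY"
url = endpoint + "/text/analytics/v3.1/sentiment"

payload = {
    "documents": [
        {"id": "1", "language": "en", "text": "I love the new search experience."}
    ]
}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/json",
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

for doc in response.json()["documents"]:
    print(doc["id"], doc["sentiment"])
&lt;/PRE&gt;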
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Will your users love your solution?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Scaling an application and polishing the user experience take most of the development time. It is better to try out features fast and adjust before investing in perfecting the wrong experience. You might assume an application flow that users are going to follow, but users can surprise you with their own creative ways of using your tools. Prototype your applications quickly and get user feedback early on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Power Platform is one of the tools that lets you create mobile apps that integrate important AI capabilities without writing any code. With Power Platform, you can easily deploy and share your prototypes without leaving the platform’s UI. After the free trial period, both training and using your AI models will cost money, but not as much as the development time of a full app with AI and major changes after the release. Check out some of the capabilities and use cases of &lt;A title="AI Builder" href="https://docs.microsoft.com/ai-builder/model-types?WT.mc_id=aiml-10397-ayyonet#model-types" target="_blank" rel="noopener"&gt;AI Builder on Power Platform&lt;/A&gt; and how to train a &lt;A title="No Code AI" href="https://techcommunity.microsoft.com/t5/apps-on-azure/how-to-create-a-no-code-ai-app-with-azure-cognitive-services-and/ba-p/1847264?WT.mc_id=aiml-0000-ayyonet" target="_blank" rel="noopener"&gt;custom vision model and create a mobile app on Power Platform in this article&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/VXD5ma2ZExw" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/VXD5ma2ZExw/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are other fast and easy options to add AI to your applications without a big development investment, especially if you are adding the capabilities to an existing application. You can use a Logic App on the Azure platform to find Twitter mentions of your brand and analyze the sentiment of the tweets. You can then visualize the data in Power BI or your choice of visualization platform or tools.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you integrate your AI solution, you can release the new AI features to a limited group of users and compare their effectiveness against your non-AI features.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Start your learning journey with the &lt;A title="AI Developer resources" href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;AI Developer resources&lt;/A&gt; and sign up for a &lt;A title="Free Azure Account" href="https://azure.microsoft.com/en-us/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;free Azure Account&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know the problems you are trying to solve and your specific use cases in the comments below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Feb 2021 00:33:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/ba-p/2115408</guid>
      <dc:creator>Yonet</dc:creator>
      <dc:date>2021-02-11T00:33:22Z</dc:date>
    </item>
    <item>
      <title>Accelerate search index development with Visual Studio Code</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-search-index-development-with-visual-studio-code/ba-p/2120941</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/search/search-what-is-azure-search" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; provides developers with APIs and tools to make it easy to add a great search experience to your application. There are tools available in the&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/search/search-import-data-portal" target="_blank" rel="noopener"&gt;portal&lt;/A&gt; to import data into a search index and &lt;A href="https://docs.microsoft.com/azure/search/search-get-started-dotnet" target="_self"&gt;SDKs&lt;/A&gt; to simplify the integration of search functionality into your code base.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, sometimes you need something in between: simpler than code, but more powerful than the portal. In these cases, it’s common to interact directly with the REST APIs to quickly update an indexer, add a document, or perform other standard tasks. Tools like &lt;A href="https://www.postman.com/" target="_blank" rel="noopener"&gt;Postman&lt;/A&gt; are great for this but building out API calls from scratch can quickly become tedious. You wouldn’t write an API call from scratch to add a document to Azure Storage—you’d use &lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_self"&gt;Azure Storage Explorer&lt;/A&gt;—and we don’t want you to have to do that for search either.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this in mind, we created the &lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch" target="_blank" rel="noopener"&gt;Visual Studio Code Extension for Azure Cognitive Search (Preview)&lt;/A&gt;. The&amp;nbsp;Visual Studio Code extension makes it easy to work with your search service using the full capabilities of the REST APIs while providing rich IntelliSense and snippets to help you. With the extension, you can create and update indexes and other components, add documents, search, and more. You’ll never need to struggle with remembering the correct syntax again.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Extension Functionality&lt;/H2&gt;
&lt;P&gt;The extension covers all the major REST API operations for Cognitive Search. Check out the examples below to see some of what’s possible and feel free to request additional functionality &lt;A href="https://github.com/microsoft/vscode-azurecognitivesearch/issues" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Browse all your Azure Cognitive Search services&lt;/H3&gt;
&lt;P&gt;Get access to all your search services in one place. You can quickly see all your indexes, indexers, and other components.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="overview.png" style="width: 595px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254023iFA5C51B05681CE1C/image-dimensions/595x419?v=v2" width="595" height="419" role="button" title="overview.png" alt="overview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create new indexes, indexers, data sources, skillsets, and synonym maps&lt;/H3&gt;
&lt;P&gt;You can create a new index or other component just by editing the JSON and saving the file. You can then read, update, or delete these components at any time.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-index.gif" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254020i58B58097B481768E/image-size/large?v=v2&amp;amp;px=999" role="button" title="create-index.gif" alt="create-index.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Take advantage of rich IntelliSense&lt;/H3&gt;
&lt;P&gt;The extension also includes IntelliSense to guide you as you’re building out your JSON. Instead of referencing external docs each time, you can see what parameters exist and what their allowed values are as you type.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="intellisense.gif" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254019i5F9CDA873D952292/image-size/large?v=v2&amp;amp;px=999" role="button" title="intellisense.gif" alt="intellisense.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;In addition to IntelliSense, the extension provides snippets or templates for building more complex objects, such as data sources and skillsets, so that you have a good starting point.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Add or update documents in the search index&lt;/H3&gt;
&lt;P&gt;Adding or updating documents is something that’s not possible in the portal today. With the extension, you can quickly add a document, and it will even save you some time by creating a JSON template for you based on your index definition.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-document-2.png" style="width: 940px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254021iF6A986F5150A5099/image-size/large?v=v2&amp;amp;px=999" role="button" title="create-document-2.png" alt="create-document-2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;You can view or update existing documents too.&lt;/P&gt;
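&lt;P&gt;Under the hood this maps to the documents index action. A minimal, hypothetical sketch of the same operation over REST (service name, key, index name, and field values are placeholders) looks like this:&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: uploading documents to an index with the Azure Cognitive Search REST API.
# Service name, admin key, index name, and field values are placeholders.
import requests

service = "my-search-service"
admin_key = "YOUR_SEARCH_ADMIN_KEY"
index_name = "hotels-quickstart"
api_version = "2020-06-30"

batch = {
    "value": [
        {
            "@search.action": "upload",  # other actions: merge, mergeOrUpload, delete
            "hotelId": "1",
            "hotelName": "Fancy Stay",
            "description": "Quiet rooms close to the waterfront.",
            "rating": 4.5,
        }
    ]
}

url = f"https://{service}.search.windows.net/indexes/{index_name}/docs/index?api-version={api_version}"
headers = {"api-key": admin_key, "Content-Type": "application/json"}

response = requests.post(url, headers=headers, json=batch)
response.raise_for_status()
print(response.json())
&lt;/PRE&gt;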
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Query your search indexes&lt;/H3&gt;
&lt;P&gt;Finally, once you’ve added documents to your search service, you can also query from within the extension and view the results side by side. You can even add multiple queries or save the queries to a file to refer to them later.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="all-searches.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254022i75C787669C9DBA76/image-size/large?v=v2&amp;amp;px=999" role="button" title="all-searches.png" alt="all-searches.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Key use cases&lt;/H2&gt;
&lt;P&gt;If your security requirements mandate the use of &lt;A href="https://docs.microsoft.com/azure/search/service-create-private-endpoint" target="_blank" rel="noopener"&gt;Private Endpoints&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/azure/search/service-configure-firewall" target="_blank" rel="noopener"&gt;IP Firewalls&lt;/A&gt;, you’ll find that some functionality is no longer available in the portal. For these cases, the extension is a great alternative to the portal for interacting with your indexes and the other components of your search service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In other cases, if you find yourself constantly recreating indexes or making small tweaks to them or other search components, the extension can make it incredibly easy to make small updates such as adding a field to an index.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;P&gt;Regardless of how you’re trying to use Cognitive Search, this extension will likely make your life easier. To get started today, &lt;A href="https://aka.ms/vscode-search" target="_blank" rel="noopener"&gt;download the extension,&lt;/A&gt; and follow the related&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/search/search-get-started-vs-code" target="_self"&gt;quickstart&lt;/A&gt;. You’ll see just how quickly and easily you can get up and running with Cognitive Search using the Visual Studio Code Extension.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you run into any issues or have any questions, please feel free to reach out to us at &lt;A href="mailto:azuresearch_contact@microsoft.com" target="_blank" rel="noopener"&gt;azuresearch_contact@microsoft.com&lt;/A&gt; or raise an issue on the extension’s &lt;A href="https://github.com/microsoft/vscode-azurecognitivesearch/issues" target="_blank" rel="noopener"&gt;GitHub repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Wed, 10 Feb 2021 21:34:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-search-index-development-with-visual-studio-code/ba-p/2120941</guid>
      <dc:creator>DerekLegenzoff</dc:creator>
      <dc:date>2021-02-10T21:34:00Z</dc:date>
    </item>
    <item>
      <title>Re: Build a natural custom voice for your brand</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/bc-p/2120754#M162</link>
      <description>&lt;P&gt;Tried the ACC tool. Lots of Fun. LOL. Thrilled to use it shortly :)&lt;/P&gt;</description>
      <pubDate>Wed, 10 Feb 2021 18:31:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/bc-p/2120754#M162</guid>
      <dc:creator>anandmicrosoft</dc:creator>
      <dc:date>2021-02-10T18:31:28Z</dc:date>
    </item>
    <item>
      <title>How to use Cognitive Services and containers</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-use-cognitive-services-and-containers/ba-p/2113684</link>
      <description>&lt;P&gt;In this blog we are going to take a look at how to run a selection of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_blank" rel="nofollow noopener"&gt;Cognitive Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in Docker-compatible containers. Running the services this way can come in handy when your application cannot always connect to the cloud or when you need more control over your data.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=Kg4nKWDo6OQ" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/Kg4nKWDo6OQ/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;BR /&gt;&lt;A id="user-content-what-are-cognitive-services" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#what-are-cognitive-services" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;What are Cognitive Services&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;Azure Cognitive Services are cloud-based services that expose AI models through a REST API. These services enable you to add cognitive features, like object detection and speech recognition, to your applications without requiring data science skills. By using the provided SDKs in the programming language of your choice, you can create applications that can see (Computer Vision), hear (Speech), speak (Speech), understand (Language), and even make decisions (Decision).&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H2&gt;&lt;A id="user-content-cognitive-services-in-containers" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#cognitive-services-in-containers" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Cognitive Services in containers&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;Azure Cognitive Services in containers give developers flexibility in where to deploy and host the services: they ship as Docker containers while keeping the same API experience as when they are hosted in Azure.&lt;/P&gt;
&lt;P&gt;Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security or other operational reasons.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;What are containers&lt;/STRONG&gt;&lt;BR /&gt;Containerization is an approach to software distribution in which an application or service, including its dependencies &amp;amp; configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. Containers can be instantiated from container images for short-term tasks, and removed when no longer needed.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=hdfbn4Q8jbo" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/hdfbn4Q8jbo/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H2&gt;&lt;BR /&gt;When to use Cognitive Services in containers?&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;Running Cognitive Services in containers can be the solution for you if you have specific requirements or constraints that make it impossible to run Cognitive Services in Azure. The most common scenarios involve connectivity and control over the data. Keep in mind that when you run Cognitive Services in Azure all the infrastructure is taken care of for you, whereas running them in containers moves the infrastructure responsibility, such as performance and updating the container, to you.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;One case where you might choose a container is when your connection to Azure is not stable enough. For instance, suppose you have thousands of documents on-premises and you want to run OCR on them. If you use the Computer Vision OCR endpoint in the cloud, you would need to send every document to the endpoint in Azure, while if you run the container locally you only need to send billing information to Azure every 15 minutes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-features-and-benefits" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#features-and-benefits" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Features and benefits&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Immutable infrastructure:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Control over data:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments – across data, management, identity, and security.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Control over model updates:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Flexibility in versioning and updating the models deployed in your solutions.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Portable architecture:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Enables the creation of a portable application architecture that can be deployed on Azure, on-premises, and at the edge. Containers can be deployed directly to Azure Kubernetes Service, Azure Container Instances, or to a Kubernetes cluster deployed to Azure Stack. For more information, see Deploy Kubernetes to Azure Stack.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;High throughput / low latency:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Provide customers the ability to scale for high throughput and low latency requirements by enabling Cognitive Services to run physically close to their application logic and data. Containers do not cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scalability:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;With the ever-growing popularity of containerization and container orchestration software such as Kubernetes, scaling is straightforward: by building on a scalable cluster foundation, your application can be designed for high availability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Which services are available&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;P&gt;Container support is currently available for a subset of Azure Cognitive Services, including parts of:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Group&lt;/TH&gt;
&lt;TH&gt;Service&lt;/TH&gt;
&lt;TH&gt;Documentation&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;Anomaly Detector&lt;/TD&gt;
&lt;TD&gt;Anomaly Detector&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/anomaly-detector/anomaly-detector-container-howto?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Computer Vision&lt;/TD&gt;
&lt;TD&gt;Read OCR (Optical Character Recognition)&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Spatial Analysis&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Form Recognizer&lt;/TD&gt;
&lt;TD&gt;Form Recognizer&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/form-recognizer-container-howto?" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Language Understanding&lt;/TD&gt;
&lt;TD&gt;Language Understanding&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-container-howto?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Speech&lt;/TD&gt;
&lt;TD&gt;Custom Speech-to-text&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=cstt&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Custom Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ctts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Speech-to-text&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=tts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Neural Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Speech language detection&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=lid&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Text Analytics&lt;/TD&gt;
&lt;TD&gt;Key Phrase Extraction&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=keyphrase&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Text language detection&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=language&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Sentiment analysis&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=sentiment&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Face&lt;/TD&gt;
&lt;TD&gt;Face&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-how-to-install-containers?&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H2&gt;&lt;BR /&gt;&lt;A id="user-content-how-to-use-cognitive-services-in-containers" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#how-to-use-cognitive-services-in-containers" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;How to use Cognitive Services in containers&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;Using the services in containers is exactly the same as using them in Azure; it is the deployment of the container that takes a bit of planning and research. The services are shipped as Docker containers, which means they can be deployed to any Docker-compatible platform: your local machine running Docker Desktop, or a fully scalable Kubernetes installation in your on-premises data center.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-generic-workflow" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#generic-workflow" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Generic workflow&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Create the resource in Azure&lt;/LI&gt;
&lt;LI&gt;Get the endpoint&lt;/LI&gt;
&lt;LI&gt;Retrieve the API Key&lt;/LI&gt;
&lt;LI&gt;Find the container for the service&lt;/LI&gt;
&lt;LI&gt;Deploy the container&lt;/LI&gt;
&lt;LI&gt;Use the container endpoint as you would use the API resource&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Optionally, you can mount your own storage and connect&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Application Insights&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
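&lt;P&gt;To give a feel for this generic workflow before the Azure-based tutorial below, here is a minimal sketch of running one of these containers on a local Docker host. The image name, resource sizing, and placeholder values are illustrative only; check the documentation page of the specific service for the exact image and parameters it requires.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Run a Cognitive Services container locally (illustrative image and values)
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
    mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
    Eula=accept \
    Billing=&amp;lt;insert endpoint&amp;gt; \
    ApiKey=&amp;lt;insert apikey&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Once started, the container listens on port 5000 and exposes the same API as the hosted service, sending only billing information to the endpoint and key you pass in.&lt;/P&gt;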
&lt;H2&gt;Tutorial: Run a Text to Speech container in an Azure Container Instance.&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;In this tutorial we are going to run a Cognitive Services Speech container in an Azure Container Instance and use the REST API to convert text into speech.&lt;/P&gt;
&lt;P&gt;To run the code below you need an Azure subscription; if you don't have an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure subscription&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;yet, you can get $200 of credit for the first month. You also need the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/cli/azure/what-is-azure-cli?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure command-line interface&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;installed; if you don't have the Azure CLI yet,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;follow this tutorial&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=8KuJKlDSNwA" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/8KuJKlDSNwA/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-1-create-a-resource-group" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#1-create-a-resource-group" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;1. Create a resource group&lt;/H3&gt;
&lt;P&gt;Everything in Azure always starts with creating a resource group. A resource group is a container that holds related resources for an Azure solution.&lt;/P&gt;
&lt;P&gt;To create a resource group using the CLI you have to specify two parameters: the name of the group and the location where the group is deployed.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az group create --name demo_rg --location westeurope
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;H3&gt;&lt;BR /&gt;2. Create Cognitive Service resource&lt;/H3&gt;
&lt;P&gt;The next resource that needs to be created is a Cognitive Services resource. To create it we need to specify a few parameters: besides the name and resource group, you need to specify the kind of Cognitive Services resource you want to create. For this tutorial we are creating a 'SpeechServices' resource.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account create \
    --name speech-resource \
    --resource-group demo_rg \
    --kind SpeechServices \
    --sku F0 \
    --location westeurope \
    --yes
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-3-get-the-endpoint--api-key" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#3-get-the-endpoint--api-key" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;3. Get the endpoint &amp;amp; API Key&lt;/H3&gt;
&lt;P&gt;If steps 1 and 2 completed successfully, we can extract the properties we need to run the container in the next step. The two properties we need are the endpoint URL and the API key. The Speech service in the container uses these properties to connect to Azure every 15 minutes to send billing information.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;To retrieve the endpoint:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account show --name speech-resource --resource-group demo_rg  --query properties.endpoint -o json
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;To retrieve the API keys:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account keys list --name speech-resource --resource-group demo_rg
&lt;/CODE&gt;&lt;/PRE&gt;
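&lt;P&gt;If you are working in a bash shell, you can optionally capture both values in variables so they are easy to reuse in the next step. This is a convenience sketch only; the variable names are arbitrary.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Store the endpoint and the first API key for later use
ENDPOINT=$(az cognitiveservices account show --name speech-resource --resource-group demo_rg --query properties.endpoint -o tsv)
API_KEY=$(az cognitiveservices account keys list --name speech-resource --resource-group demo_rg --query key1 -o tsv)
&lt;/CODE&gt;&lt;/PRE&gt;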
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-3-deploy-the-container-in-an-aci" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#3-deploy-the-container-in-an-aci" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;3. Deploy the container in an ACI&lt;/H3&gt;
&lt;P&gt;One of the easiest ways to run a container is to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/container-instances/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Container Instances&lt;/A&gt;. With one command in the Azure CLI you can deploy a container and make it accessible to everyone.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Creating an ACI takes a few parameters. If you want your ACI to be accessible from the internet you need to specify the '--dns-name-label' parameter. The URL for the ACI will look like this: http://{dns-name-label}.{region}.azurecontainer.io. The dns-name-label value needs to be unique.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az container create \
    --resource-group demo_rg \
    --name speechcontainer \
    --dns-name-label &amp;lt;insert unique name&amp;gt; \
    --memory 2 --cpu 1 \
    --ports 5000 \
    --image mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest \
    --environment-variables \
        Eula=accept \
        Billing=&amp;lt;insert endpoint&amp;gt; \
        ApiKey=&amp;lt;insert apikey&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;The deployment of the container takes a few minutes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-4-validate-that-a-container-is-running" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#4-validate-that-a-container-is-running" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;4. Validate that a container is running&lt;/H3&gt;
&lt;P&gt;The easiest way to validate if the container is running, is to use a browser and open the container homepage. To do this you first need to retrieve the URL for the container. This can be done using the Azure CLI with the following command.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az container show --name speechcontainer --resource-group demo_rg --query ipAddress.fqdn -o json
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Navigate to the URL on port 5000. The URL should look like this:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;&lt;A href="http://{dns-name-label}.{region}.azurecontainer.io:5000/" target="_blank" rel="noopener"&gt;http://{dns-name-label}.{region}.azurecontainer.io:5000/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;If everything went well you should see a screen like this:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="container_is_running" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252211iD041E4C90262D356/image-size/medium?v=v2&amp;amp;px=400" role="button" title="container_is_running" alt="container_is_running" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN&gt;6.&amp;nbsp;&lt;/SPAN&gt;Submit your first task&lt;/H3&gt;
&lt;P&gt;The Text-to-Speech service in the container exposes a REST endpoint. To use it we need to send a POST request, and there are many ways to do that; in this tutorial we are going to use Visual Studio Code.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Requirements&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Download&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and Install Visual Studio Code&lt;/LI&gt;
&lt;LI&gt;Install a plugin called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://marketplace.visualstudio.com/items?itemName=humao.rest-client&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;REST Client&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;Once you have Visual Studio Code with the REST Client extension installed, create a file called rest.http and paste the code below into it.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;POST http://&amp;lt;dns-name-label&amp;gt;.&amp;lt;region&amp;gt;.azurecontainer.io:5000/speech/synthesize/cognitiveservices/v1  HTTP/1.1
Content-Type: application/ssml+xml
X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
Accept: audio/*

&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;
    &amp;lt;voice name="en-US-AriaRUS"&amp;gt;
        The future we invent is a choice we make. 
        Not something that just happens.
    &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;UL&gt;
&lt;LI&gt;Change the URL to the URL of your ACI.&lt;/LI&gt;
&lt;LI&gt;Next, click the Send Request link (just above the URL).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;On the right side of VS Code you should see the response from the API. In the top-right corner you will see a "Save Response Body" button; click it and save the response as a .wav file. Now you can play the response in any media player.&lt;/P&gt;
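&lt;P&gt;If you prefer the command line over Visual Studio Code, the same request can be sent with curl. This is a sketch assuming the same ACI URL and SSML body as above; it writes the returned audio to output.wav.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Send the same synthesis request with curl and save the audio to a file
curl -X POST "http://&amp;lt;dns-name-label&amp;gt;.&amp;lt;region&amp;gt;.azurecontainer.io:5000/speech/synthesize/cognitiveservices/v1" \
    -H "Content-Type: application/ssml+xml" \
    -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
    -H "Accept: audio/*" \
    -d '&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;&amp;lt;voice name="en-US-AriaRUS"&amp;gt;The future we invent is a choice we make. Not something that just happens.&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;' \
    --output output.wav
&lt;/CODE&gt;&lt;/PRE&gt;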
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="vscode_api_response" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252214i2C870D0807E43A24/image-size/large?v=v2&amp;amp;px=999" role="button" title="vscode_api_response" alt="vscode_api_response" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Learn more&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A title="Get started with a free Azure Account" href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="noopener"&gt;Get started with a free Azure Account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678" target="_blank" rel="nofollow noopener"&gt;Get skilled on AI and ML – on your terms with Azure AI&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A title="AI Developer page" href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;AI Developer page&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A title="Watch Azure AI Essentials: Easily add AI to your applications" href="https://www.youtube.com/watch?v=Kg4nKWDo6OQ&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=2" target="_blank" rel="nofollow noopener"&gt;Watch Azure AI Essentials: Easily add AI to your applications&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;A id="user-content-microsoft-learn" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#microsoft-learn" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Microsoft Learn&lt;/H3&gt;
&lt;P&gt;Microsoft Learn is a free, online training platform that provides interactive learning for Microsoft products and more.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;For this blog we have created a custom&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Collection of Learn Modules" href="https://aka.ms/ai/learn/cognitive-containers" target="_blank" rel="nofollow noopener"&gt;Collection of Learn Modules&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;covering all the topics in depth.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-blogs-and-articles" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#blogs-and-articles" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Blogs and articles&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/getting-started-with-azure-cognitive-services-in-containers/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Getting started with Azure Cognitive Services in containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/bringing-ai-to-the-edge/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Bringing AI to the edge&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/running-cognitive-service-containers/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Running Cognitive Service containers&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-on-microsoft-docs" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#on-microsoft-docs" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;On Microsoft Docs&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers Support&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/container-reuse-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Create containers for reuse&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/azure-container-instance-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Deploy and run container on Azure Container Instance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/azure-kubernetes-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Deploy the Text Analytics language detection container to Azure Kubernetes Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/docker-compose-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Use Docker Compose to deploy multiple containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/container-faq?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers frequently asked questions (FAQ)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Feb 2021 06:43:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-use-cognitive-services-and-containers/ba-p/2113684</guid>
      <dc:creator>hboelman</dc:creator>
      <dc:date>2021-02-11T06:43:42Z</dc:date>
    </item>
    <item>
      <title>Build a natural custom voice for your brand</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice is a &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/text-to-speech" target="_blank" rel="noopener"&gt;Text-to-Speech&lt;/A&gt; (TTS) feature of Speech in Azure Cognitive Services that allows you to create a one-of-a-kind customized synthetic voice for your brand. Since its preview in September 2019, Custom Neural Voice has empowered organizations such as AT&amp;amp;T, Duolingo, Progressive, and Swisscom to develop branded speech solutions that delight users.&amp;nbsp;(For more details, read the &lt;A href="https://aka.ms/AAatzsx" target="_blank" rel="noopener"&gt;Innovation Stories blog&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Custom Neural Voice is now generally available (GA). It is important to note that although Custom Neural Voice is GA from a technological standpoint, interested &lt;A href="http://aka.ms/customneural" target="_blank" rel="noopener"&gt;customers must apply&lt;/A&gt; and be approved to use it. Alternatively, developers can add TTS capabilities to their apps quickly by creating an Azure Speech instance and selecting from over 200 pre-built TTS and Neural TTS voices across 54 languages/locales.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we’ll introduce how Custom Neural Voice works and share best practices in responsibly creating a highly natural brand voice for your apps. &amp;nbsp;If you have questions, join us at our ‘&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/bd-p/AzureAIAMA?ranMID=24542&amp;amp;ranEAID=je6NUbpObpQ&amp;amp;ranSiteID=je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q&amp;amp;epi=je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q&amp;amp;irgwc=1&amp;amp;OCID=AID2000142_aff_7593_1243925&amp;amp;tduid=(ir__z0vjacwkinyoagkyisqgt9flum2xpkxxktxeok6d00)(7593)(1243925)(je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q)()&amp;amp;irclickid=_z0vjacwkinyoagkyisqgt9flum2xpkxxktxeok6d00" target="_blank" rel="noopener"&gt;Ask-Microsoft-Anything&lt;/A&gt;’ on Wednesday, 2/10 at 9AMPT. &lt;A href="https://www.myeventurl.com/Events/Details/203" target="_blank" rel="noopener"&gt;Add to Calendar&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Your voice, your brand&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In a world where voice-based interactions are increasingly becoming the norm, your voice is your brand. A recognizable digital voice helps your customers connect with your brand in new ways.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In recent years we have seen increased interest from a broad range of companies across Media and Entertainment, Telecom, Automobile, Education, and Hospitality, who consider voice-based interactions from a range of devices like phones, speakers, TV/cable boxes, and cars as a key interaction point with their customers. These organizations are looking to have a consistent, branded experience delivered directly to their customers. To highlight one such example, below is an audio sample of the 'Flo' virtual chatbot from Progressive.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Sample: 'Flo' from Progressive&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/flo_sample.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice empowers people and organizations in many ways. The following scenarios are examples of use cases where customers find Custom Neural Voice particularly useful and valuable:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Customer Service Chatbots&lt;/STRONG&gt; – Companies can automate their call center operation with conversational AI to answer calls from customers with a natural-sounding voice that conveys friendliness, empathy, and professionalism and other values that are important to companies. For example, Progressive is using Custom Neural Voice to enable their virtual version of Flo to help their customers with ‘everything from getting a free car insurance to general insurance questions’. &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Read the full story&lt;/A&gt;.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Assistants&lt;/STRONG&gt; – Companies developing smart assistants on appliances, cars, and homes can use Custom Neural Voice to create a unique synthetic voice that conveys the brand of the company, the persona of the assistant, and a speaking style that enables the best experience for their target users. With Custom Neural Voice, Swisscom was able to create a multilingual voice assistant that sounds human, is unique to Swisscom, and resonates with its audience. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Read the full story&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Online Learning&lt;/STRONG&gt; – Education providers can add speech to their learning material with a voice that is suitable for the subjects and the students, thereby improving the engagement of the students and the effectiveness of the learning. Duolingo is using the Custom Neural Voice capability to develop stylized voices for their virtual characters for their online learning experience. &lt;A href="https://youtu.be/m-3-D7S0piw?t=672" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Audio Books&lt;/STRONG&gt; – Content publishers can turn written content into audio that is spoken with a synthetic voice to make it more accessible to a global audience. With Custom Neural Voice, content publishers can create one or more unique voices with natural reading styles that match the subject and context of the content as well as the preference of the listeners. The Beijing Hongdandan Visually Impaired Service Center is using the Custom Neural Voice capability to produce audiobooks based on the voice of Lina, a trainer at the organization whose voice is familiar to people who are blind in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Assistive Technology and Real-time Translations&lt;/STRONG&gt; – Custom Neural Voice can be used to assist people in need or to improve accessibility. As an assistive technology, it can enable people with speech impairments to communicate with others in a voice that sounds like them. Custom Neural Voice can also be used in scenarios such as real-time translation, allowing people to communicate in a foreign language in a familiar voice.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Public Service Announcement&lt;/STRONG&gt; – Public service organizations can use Custom Neural Voice to create a voice that is suitable for public announcements, whether it is in an airport, a train terminal, or other venues. The use of synthetic voice provides the ability to generate announcements with dynamic content that cannot be recorded ahead of time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Benefit of Custom Neural Voice&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Traditionally, TTS requires a large volume of voice data—in the range of 10,000 lines or more—to produce a fluent voice model. Consequently, TTS models with fewer recorded lines tend to sound noticeably robotic.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the innovation of deep neural networks and a powerful base model built with speech data from many different speakers, Neural TTS can 'learn' the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Empowered with this technology, Custom Neural Voice enables users to build highly realistic voices with just a small amount of training audio. Compared to traditional training methods, companies can spend roughly a tenth of the effort on preparing training data while significantly increasing the naturalness of the synthetic speech output.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Listen to the samples created with Custom Neural Voice below. Or try more demos on the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Language &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;&lt;STRONG&gt;Human &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;&lt;STRONG&gt;TTS (Custom Neural Voice)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Lina (Hongdandan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lina-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lina-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;English (Australia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Thomas&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Thomas-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Thomas-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Angela&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Angela-happy-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Angela-happy-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;French (France)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Zoe (Swisscom)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Zoe-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Zoe-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;German (Germany)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Lara (Swisscom)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lara-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lara-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc62633372"&gt;&lt;/A&gt;How it works&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice is based on Neural TTS technology, which creates natural-sounding synthetic speech. This realistic, natural-sounding voice can represent brands, personify machines, and allow users to interact with applications conversationally in a natural way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The underlying Neural TTS technology used for Custom Neural Voice consists of three major components: &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187" target="_blank" rel="noopener"&gt;Text Analyzer&lt;/A&gt;, &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;Neural Acoustic Model&lt;/A&gt;, and &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;Neural Vocoder&lt;/A&gt;. To generate natural synthetic speech from text, the text is first passed to the Text Analyzer, which outputs a phoneme sequence. A phoneme is a basic unit of sound that distinguishes one word from another in a particular language, and a sequence of phonemes defines the pronunciations of the words in the text. The phoneme sequence then goes into the Neural Acoustic Model, which predicts the acoustic features that define the speech signal, such as timbre, speaking style, speed, intonation, and stress patterns. Finally, the Neural Vocoder converts the acoustic features into audible waves to produce the synthetic speech.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS voice models are trained using deep neural networks based on real voice recording samples. With the customization capability of Custom Neural Voice, you can adapt the Neural TTS engine to better fit your user scenarios. To create a custom neural voice, visit the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt; to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Depending on the use case, Custom Neural Voice can be used to convert text into speech in real time (e.g., in a smart virtual assistant) or to generate audio content offline (e.g., for audiobooks or instructions in e-learning applications) from text input provided by the user. This is made available &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech" target="_blank" rel="noopener"&gt;through REST APIs&lt;/A&gt;, the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Speech SDK&lt;/A&gt;&lt;SPAN&gt;,&lt;/SPAN&gt; or a &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;no-code Audio Content Creation tool&lt;/A&gt;.&lt;/P&gt;
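&lt;P&gt;As a rough illustration only, the sketch below shows how a deployed custom voice endpoint might be called in real time from the Speech SDK for Python. The key, region, endpoint ID, and voice name are hypothetical placeholders, not values from this article.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (Python Speech SDK): synthesize text with a deployed custom neural voice.
# The key, region, endpoint ID, and voice name below are hypothetical placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.endpoint_id = "YOUR_CUSTOM_VOICE_ENDPOINT_ID"   # from your Speech Studio deployment
speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"

# Write the synthesized audio to a local file.
audio_config = speechsdk.audio.AudioOutputConfig(filename="greeting.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

result = synthesizer.speak_text_async("Hello, welcome to our service.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio written to greeting.wav")
else:
    print("Synthesis did not complete:", result.reason)
&lt;/PRE&gt;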
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Building a Custom Neural Voice&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of Microsoft’s commitment to responsible AI, we are designing and releasing Custom Neural Voice with the intention of protecting the rights of individuals and society, fostering transparent human-computer interaction, and counteracting the proliferation of harmful deepfakes and misleading content. For this reason, we have limited the access and use of Custom Neural Voice. &lt;A href="http://aka.ms/customneural" target="_blank" rel="noopener"&gt;Submit an intake form here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft requires every customer to obtain explicit written permission from the voice talent before creating a voice model (see &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Disclosure for Voice Talent&lt;/A&gt;). In addition, you must not use custom neural voice for certain prohibited use cases (see &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Code of Conduct&lt;/A&gt;) and must disclose the synthetic nature of the service to your users upon deployment of the custom voice model (see &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/concepts-disclosure-guidelines" target="_blank" rel="noopener"&gt;Disclosure Guidelines&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When preparing your recording script, make sure you include the following sentence to obtain the voice talent’s acknowledgement that their voice data will be used to create a TTS voice model and generate synthetic speech.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;“I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;As a technical safeguard intended to prevent misuse of Custom Neural Voice services, Microsoft uses this recording, together with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speaker-recognition-overview#speaker-verification" target="_blank" rel="noopener"&gt;Speaker Verification&lt;/A&gt; technology, to verify that the voice talent reading the statement matches the voice provided in the training data. Read more about this process in the &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Data and Privacy document&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the video below, we introduce how to use the Speech Studio to create a highly natural voice with your own data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/di3vKMhyLaY" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/di3vKMhyLaY/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Creating a great custom voice requires careful quality control at each step, from voice design and data preparation to the deployment of the voice model in your system. &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;This docs page&lt;/A&gt; outlines in more detail the characteristics, limitations, and best practices for designing and building a custom neural voice.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below are some key steps to take when creating a custom neural voice for your organization. (Note: this presumes you have applied for and been approved to use Custom Neural Voice.)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 1: Persona design&lt;/H4&gt;
&lt;P&gt;First, design a persona of the voice that represents your brand, using a persona brief document that defines elements such as the features of the voice and the character behind it. This will help guide the process of creating a custom voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 2: Script selection&lt;/H4&gt;
&lt;P&gt;Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you are creating a customer service bot. Include different sentence types in your scripts, including statements, questions, exclamations, etc.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 3: Preparing training data&lt;/H4&gt;
&lt;P&gt;We recommend that the audio recordings be captured in a professional-quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and expressive mannerisms of speech are required.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Common issues with recordings include speaking-style mismatch (e.g., not speaking in the ‘excited’ manner that you want the voice to have), unnatural speed, unstable pauses, mispronounced words, etc. It is recommended that you work with a voice director to control the recording quality. Follow the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/record-custom-voice-samples" target="_blank" rel="noopener"&gt;recording guidance here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the recordings are ready, follow the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice-prepare-data" target="_blank" rel="noopener"&gt;instructions here&lt;/A&gt; to prepare the training data in the right format.&lt;/P&gt;
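&lt;P&gt;As a very rough sketch of that preparation step, the script below packages studio recordings and a tab-delimited transcript into upload files. The folder layout, file names, and the assumption that each transcript line pairs an audio file name with its spoken text are illustrative only; confirm the exact format required by Speech Studio in the linked instructions.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: package recordings plus a tab-delimited transcript for upload.
# The layout assumed here (one .wav per utterance, a sidecar .txt with its script,
# and a transcript line of "file name TAB spoken text") is illustrative only;
# confirm the required format in the linked data-preparation instructions.
import zipfile
from pathlib import Path

recordings = Path("recordings")  # folder of studio-quality .wav files and matching .txt scripts

lines = []
for wav in sorted(recordings.glob("*.wav")):
    script = (recordings / (wav.stem + ".txt")).read_text(encoding="utf-8").strip()
    lines.append(wav.stem + "\t" + script)

# One transcript file covering all utterances.
Path("transcript.txt").write_text("\n".join(lines), encoding="utf-8")

# One archive containing all the audio recordings.
with zipfile.ZipFile("training_audio.zip", "w") as zf:
    for wav in sorted(recordings.glob("*.wav")):
        zf.write(wav, arcname=wav.name)
&lt;/PRE&gt;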
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 4: Testing&lt;/H4&gt;
&lt;P&gt;Prepare test scripts for your voice model that cover the different use cases for your apps. It’s recommended that you use scripts within and outside the training dataset so you can test the quality more broadly for different content.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 5: Tuning and adjustment&lt;/H4&gt;
&lt;P&gt;The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, several adjustments can be made using &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;SSML (Speech Synthesis Markup Language)&lt;/A&gt; when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the TTS service to convert text into audio. The adjustments include changes to pitch, rate, and intonation, as well as pronunciation corrections. If the voice model is built with multiple styles, SSML can also be used to switch between the styles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All of the SSML adjustments mentioned above can be passed directly to the API. We also provide an online tool, &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;Audio Content Creation&lt;/A&gt;, that allows customers to fine-tune their audio output using a friendly UI.&lt;/P&gt;
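&lt;P&gt;For illustration, here is a sketch of passing SSML with prosody adjustments to the synthesis API using the Speech SDK for Python. The key, region, endpoint ID, and voice name are hypothetical placeholders.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: pass SSML with rate and pitch adjustments to the synthesis API.
# Key, region, endpoint ID, and voice name are hypothetical placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.endpoint_id = "YOUR_CUSTOM_VOICE_ENDPOINT_ID"

ssml = """
&lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"&gt;
  &lt;voice name="YourCustomVoiceName"&gt;
    &lt;prosody rate="-10%" pitch="+5%"&gt;
      Thanks for calling. How can I help you today?
    &lt;/prosody&gt;
  &lt;/voice&gt;
&lt;/speak&gt;
"""

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
&lt;/PRE&gt;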
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Interested in building a custom neural voice? Check the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#customization" target="_blank" rel="noopener"&gt;languages&lt;/A&gt; supported. Sign up to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview#create-the-azure-resource" target="_blank" rel="noopener"&gt;Speech service on Azure&lt;/A&gt; and get started on the&amp;nbsp;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Besides the capability to customize TTS voice models, Microsoft offers over 200 neural and standard voices covering 54 languages and locales. With these Text-to-Speech voices, you can quickly add read-aloud functionality for a more accessible app design, or give a voice to chatbots to provide richer conversational experiences to your users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;For more information:&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/AMA-SpeechCNV" target="_blank" rel="noopener"&gt;Join us&lt;/A&gt; during our ‘Ask Microsoft Anything’ on Wed., Feb. 10&lt;SUP&gt;th&lt;/SUP&gt; (9amPT) (&lt;A href="https://www.myeventurl.com/Events/Details/203" target="_blank" rel="noopener"&gt;add to Calendar&lt;/A&gt;)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;Add Text-to-Speech to your apps today&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/customneural" target="_blank" rel="noopener"&gt;Apply for access to Custom Neural Voice&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Learn more&lt;/A&gt; about responsible use of Custom Neural Voice&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try our demo&lt;/A&gt; to listen to existing neural voices&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 04 Feb 2021 05:15:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2021-02-04T05:15:47Z</dc:date>
    </item>
    <item>
      <title>QnA with Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;+ Azure Cognitive Search enables instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;over your search results&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Now, you&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;do not&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;need to spend time looking through your pile of documents to find the exact answer to your&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;query&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;There will be an instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;coming up for the user query from the most relevant documents present in your system.&amp;nbsp; A s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;olution&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;where you can ingest your pile of documents and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;query&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;over&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;them&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to get the answer as well as related relevant documents to get more inform&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This solution accelerator enables automatic bulk ingestion of documents for QnA processing via a Cognitive Search custom skill. The sample UI showcases the combined experience of instant answers to your questions as well as the list of relevant documents. Finally, the solution is easily deployed using a simple Deploy button, which sets up all necessary services in your Azure subscription.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="B1.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247969iBFE457D1CC9179EE/image-size/large?v=v2&amp;amp;px=999" role="button" title="B1.PNG" alt="B1.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2 aria-level="3"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Benefit&lt;/SPAN&gt;&lt;SPAN&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Converged search experience powering instant answer and relevant documents&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Search using natural language queries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;One-click deployment.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Saves end user time during search.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Flexibility to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;enhance&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and edit&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;instant answers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The solution combines the power of both Azure Cognitive Search and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Maker to extract question-answer pairs from your documents before storing them in the index.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Once you deploy&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the solution&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, you get a single endpoint where for each end user query both&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the services&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;will be called&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;in parallel&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and you will get a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;combined result&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;with an instant answer powered by&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Maker&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;along with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the relevant documents coming from Azure Cognitive Search.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp; Checkout the&amp;nbsp;&lt;A title="Cognitive Search Question Answering Solution Accelerator (github.com)&amp;nbsp;" href="https://github.com/Azure-Samples/search-qna-maker-accelerator" target="_self"&gt;Cognitive Search Question Answering Solution Accelerator (github.com)&amp;nbsp;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Architecture:&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="A1.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247968i1DFD2E013A027938/image-size/large?v=v2&amp;amp;px=999" role="button" title="A1.PNG" alt="A1.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This solution accelerator contains the following artifacts:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ARM template to set up the solution.&lt;/LI&gt;
&lt;LI&gt;Custom skill in Azure Cognitive Search, which ingests the data into QnA Maker.&lt;/LI&gt;
&lt;LI&gt;User interface to view the results.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 aria-level="3"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Live Demo Link:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;You can view a live demo of this repo at the following link:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/qnaWithAzureSearchDemo" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://aka.ms/qnaWithAzureSearchDemo&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;File Type Supported:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Currently instant answers will only be available for the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-sources-and-content#file-and-url-data-types" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;file types supported by QnA Maker&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;By default, the logic in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure Cognitive&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Search service&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;indexer also ingests only the following file types: .pdf,.docx,.doc,.xlsx,.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;xls&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,.html,.rtf,.txt,.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;tsv&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. You can change this by modifying the &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;indexedFileNameExtensions&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; property in the &lt;/SPAN&gt;&lt;A href="https://github.com/jennifermarsman/cognitive-search-qna-solution/blob/main/CustomSkillForDataIngestion/QnAIntegrationCustomSkill/Assets/Indexer.json" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Indexer.json&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;Tutorial:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;NOTE: You need to have a&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;GitHub account&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and &lt;A title="Azure subscription" href="https://azure.microsoft.com/en-in/free/search/?&amp;amp;ef_id=CjwKCAiA6aSABhApEiwA6Cbm__JLI8gmtvf_CU83p4p9LOtvNL79avKBUrpDSNLNtOqfHPwrL2xjmBoCMqYQAvD_BwE:G:s&amp;amp;OCID=AID2100054_SEM_CjwKCAiA6aSABhApEiwA6Cbm__JLI8gmtvf_CU83p4p9LOtvNL79avKBUrpDSNLNtOqfHPwrL2xjmBoCMqYQAvD_BwE:G:s&amp;amp;dclid=CjkKEQiA6aSABhDamMvU3YfhmvEBEiQARvYBV-brWGJCeMzy4yHQaETcb2T8oireOC1K7_OlXTvkia7w_wcB" target="_self"&gt;Azure subscription&lt;/A&gt;&amp;nbsp;to try out this solution.&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="3"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Resource creation and deployment:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;C&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;lick&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;A title="here to Deploy to Azure." href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fsearch-qna-maker-accelerator%2Fmain%2Fazuredeploy.json" target="_self"&gt;here to Deploy to Azure.&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;This&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;will take you to the create blade where all the information will be&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;pre-filled&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, as shown below. Cl&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ick&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Review+ Create button to proceed.&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T1.PNG" style="width: 805px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247990i1478AD64E6B80255/image-size/large?v=v2&amp;amp;px=999" role="button" title="T1.PNG" alt="T1.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Your deployment process will take 4-5 minutes to complete. Once completed, you will land on the following page:&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T2.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247971i9A250164A9FCF28E/image-size/large?v=v2&amp;amp;px=999" role="button" title="T2.PNG" alt="T2.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click on Deployment details to check all the resources that have been created.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T8.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247993i7624A7F83AC8AADA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T8.PNG" alt="T8.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Initialization:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;To initialize the solution, c&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;lick on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;Output&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;s”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;on&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the left&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;C&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;opy the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;http trigger&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;to initialize&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;accelerator" value.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN 
class="NormalTextRun SCXW106106251 BCX8"&gt;O&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;pen&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;a new browser tab and paste th&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;is&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;URL into the browser. This will run for about a minute&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then you'll see a message indicating success or failure.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW106106251 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T4.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247974iCC761AD4821A73D0/image-size/large?v=v2&amp;amp;px=999" role="button" title="T4.PNG" alt="T4.PNG" /&gt;&lt;/span&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="TextRun SCXW224694491 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW224694491 BCX8"&gt;If the initialization is successful, then following message will appear:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW224694491 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T5.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247973iC120841094F795B6/image-size/large?v=v2&amp;amp;px=999" role="button" title="T5.PNG" alt="T5.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW224694491 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;Once&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the resources are initialized, you can access the portal through the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW158062791 BCX8"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;UI portal link&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW158062791 BCX8"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;”&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;val&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;ue&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the Output tab.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW158062791 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T6.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247975iF776FE75A4D974DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T6.PNG" alt="T6.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN class="TextRun SCXW108720999 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW108720999 BCX8" data-ccp-parastyle="heading 3"&gt;Upload Documents:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;You can upload the documents one by one through the UI portal, by going&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW10568960 BCX8"&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;to&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the Upload tab.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T7.PNG" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247976i064F83000CD342E8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="T7.PNG" alt="T7.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;You can also upload the documents in bulk, through&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;a&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;container.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL class="lia-list-style-type-disc"&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Go to your storage account.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T3.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247972i0877C45B7CC56167/image-size/large?v=v2&amp;amp;px=999" role="button" title="T3.PNG" alt="T3.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;Click on Containers and select qna-container to upload the documents in bulk.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T9.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247977iC8FD7AF9E44693DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T9.PNG" alt="T9.PNG" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T10.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247978i179188CB6221FCC2/image-size/large?v=v2&amp;amp;px=999" role="button" title="T10.PNG" alt="T10.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;Use the Upload tab and select the multiple files you want to ingest. It will take some time to index the documents and to extract the Question Answer pairs out of the documents.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T11.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247981i2F1755C50C6172E7/image-size/large?v=v2&amp;amp;px=999" role="button" title="T11.PNG" alt="T11.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW73896261 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW73896261 BCX8" data-ccp-parastyle="heading 3"&gt;Question Answer Enhancement:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;Once the ingestion is complete, you can view all the Question Answer pairs extracted from the documents by&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;clicking on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW12433349 BCX8"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;Knowledge Base&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW12433349 BCX8"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW12433349 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T12.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247980iECB7617F408F35CF/image-size/large?v=v2&amp;amp;px=999" role="button" title="T12.PNG" alt="T12.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW12433349 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Play with your knowledge base&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;!&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextDeletion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&amp;nbsp;Y&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SpellingErrorV2 SCXW244092604 BCX8"&gt;ou&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can also test&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;for different queries using the Test Pane. 
Once you are satisfied with the experience, click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Save and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextDeletion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;T&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;rain&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Publish&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the changes to get these changes&amp;nbsp;reflected on your main portal.&lt;SPAN class="EOP SCXW244092604 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T13.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247982i003DA618BB6ED3AE/image-size/large?v=v2&amp;amp;px=999" role="button" title="T13.PNG" alt="T13.PNG" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun CommentStart SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;&lt;SPAN data-contrast="auto"&gt;This solution has been specifically created for our customers to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;address&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;long-term standing&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ask&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;retrieve&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;an&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;from the relevant document&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. This solution currently covers the basic&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;functionality,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and we will keep adding more features based on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;user interaction and customer’s feedback.&amp;nbsp; Please feel free to drop us a mail at&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;A tabindex="-1" title="mailto:search-qna-solution@microsoft.com" href="mailto:search-qna-solution@microsoft.com" target="_blank" rel="noreferrer noopener"&gt;search-qna-solution@microsoft.com &lt;/A&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun CommentStart SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;&lt;SPAN data-contrast="auto"&gt;to provide your valuable feedback.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Useful Links:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A title="Azure Cognitive Search documentation" href="https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search" target="_self"&gt;Azure Cognitive Search documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/search/search-sku-tier" target="_blank" rel="noopener"&gt;Choose a pricing tier - Azure Cognitive Search | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/search/" target="_blank" rel="noopener"&gt;Pricing - Search | Microsoft Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#types-of-storage-accounts" target="_blank" rel="noopener"&gt;Storage account overview - Azure Storage | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/app-service/windows/" target="_blank" rel="noopener"&gt;App Service Pricing | Microsoft Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans" target="_blank" rel="noopener"&gt;App Service plans - Azure App Service | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1" target="_blank" rel="noopener"&gt;QnA Maker documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1#add-a-new-question-and-answer-set" target="_blank" rel="noopener"&gt;Add a new Question Answer pair in QnA Maker&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1#test-the-knowledge-base" target="_blank" rel="noopener"&gt;Test your Knowledge Base in QnA Maker&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 03 Feb 2021 07:59:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2021-02-03T07:59:57Z</dc:date>
    </item>
    <item>
      <title>Re: How BERT is integrated into Azure automated machine learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/bc-p/2108484#M158</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;EM&gt;with BERT we don’t technically need to train it as it’s pretrained on a large corpus of text.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;It may be worth mentioning to which natural languages this statement holds.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 29 Jan 2021 19:36:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/bc-p/2108484#M158</guid>
      <dc:creator>goroggy</dc:creator>
      <dc:date>2021-01-29T19:36:36Z</dc:date>
    </item>
    <item>
      <title>Unified Neural Text Analyzer: an innovation to improve Neural TTS pronunciation accuracy</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187</link>
      <description>&lt;H1&gt;Introducing Unified Neural Text Analyzer: an innovation for Neural Text-to-Speech pronunciation accuracy improvement &amp;nbsp;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This post is co-authored by Dongxu Han, Junwei Gan and Sheng Zhao&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt;&lt;SPAN&gt; (Neural TTS)&lt;/SPAN&gt;&lt;SPAN&gt;,&lt;/SPAN&gt;&amp;nbsp;part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. For example, &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt;, &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Progressive&lt;/A&gt; and &lt;A href="https://aka.ms/MotorolaSolutions" target="_blank" rel="noopener"&gt;Motorola Solutions&lt;/A&gt; are using Azure Neural TTS to develop conversational interfaces for their voice assistants in English speaking locales. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Swisscom&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;Poste Italiane&lt;/A&gt; are adopting neural voices in French, German and Italian to interact with their customers in the European market. &lt;A href="https://customers.azure.cn/hongdandan/index.html" target="_blank" rel="noopener"&gt;Hongdandan&lt;/A&gt;, a non-profit organization, is adopting neural voices in Chinese to make their online library audible for the blind people in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we introduce our latest innovation in the Neural TTS technology that helps to improve the pronunciation accuracy significantly: Unified Neural Text Analyzer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is text analyzer?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS converts plain text into waveforms via three modules: a neural text analyzer, a neural acoustic model and a &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;neural vocoder&lt;/A&gt;. The text analyzer converts plain text to pronunciations, the acoustic model converts pronunciations to acoustic features, and finally the vocoder generates waveforms. The text analyzer is the first link in the entire TTS system, and its results directly affect the acoustic model and the vocoder. Correct pronunciation of a word or phrase is a basic expectation in TTS, since it delivers the right information to the user, but it is not always easy to achieve. For example, “live” should be read differently in “We &lt;EM&gt;live&lt;/EM&gt; in a mobile world” and “TV Apps and &lt;EM&gt;live&lt;/EM&gt; streaming offerings from The Weather Network”, depending on context. If TTS reads them incorrectly, the intelligibility and naturalness of the content are significantly affected. Thus, the text analyzer is important to TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Recent updates to Neural TTS include a major innovation in the text analyzer, called “UniTA” (Unified Neural Text Analyzer). UniTA is a unified text analyzer model that simplifies the text analyzer workflow and reduces latency in the runtime server. It adopts a multitask learning approach, jointly training all disambiguation models to resolve context ambiguity and generate correct pronunciations, which reduces pronunciation errors by over 50%.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What are the challenges?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Generally, different natural languages follow different grammar. In TTS, the text analyzer needs to follow the grammar of each language in order to generate correct pronunciations, which involves, but isn’t limited to, the following grammar categories:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt; is the process of dividing the written text into meaningful units, such as words. In English and many other languages using some form of the Latin alphabet, the space is a good approximation of a word divider. On the other hand, in languages such as Chinese or Japanese, there is no spacing in sentences. Different word segmentation results may cause different meanings and pronunciations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Part-of-Speech Tagging&lt;/STRONG&gt; is the process of marking up a word in a text as corresponding to a particular part of speech (such as noun, verb, adj, adv and so on), based on both its definition and its context.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt; is the process of classifying words according to shared inflectional categories such as person (first, second, third), number (singular vs. plural), gender (masculine, feminine, neuter) and case (nominative, oblique, genitive) for a given lexeme.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt; is the process of transforming digits or symbols to their standard format for disambiguation, for example: “$200" would be normalized as "two hundred dollars”, “200M" would be normalized as "two hundred meters” or “two hundred million”.&lt;/LI&gt;
&lt;LI&gt;Similar to Text Normalization, &lt;STRONG&gt;Abbreviation Expansion &lt;/STRONG&gt;is the process of transforming non-standard words to their standard format for disambiguation, for example: “VI" would be normalized as "six”, “St" would be normalized as "Saint” or “street”.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Polyphone Disambiguation&lt;/STRONG&gt; is the process of marking up polyphone word (heteronym word, which has one spelling but has more than one pronunciation and meaning) to its correct pronunciation based on its context.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;&lt;STRONG&gt;Example &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;English&lt;/EM&gt;]&lt;BR /&gt;Nice to meet u:) --&amp;gt; Nice / to / meet / u / :)&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Chinese&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;在圣诞节纽约大都会有演出 --&amp;gt; 在 / 圣诞节 / 纽约 / 大 / 都会(du1 hui4) / 有 / 演出&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Chinese&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;在圣诞节纽约大都会有演出 --&amp;gt; 在/ 圣诞节 / 纽约 / 大都(da4 dou1) / 会 / 有 / 演出&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Part-of-Speech&lt;/P&gt;
&lt;P&gt;Tagging&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Noun, | l ai v s |&lt;/EM&gt;]&lt;BR /&gt;Many people have lost their &lt;STRONG&gt;lives&lt;/STRONG&gt; since the cyclone because aid has not been able to be distributed.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Verb, | l I v s |&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;I also discovered the very angry raccoon that &lt;STRONG&gt;lives&lt;/STRONG&gt; near my porch.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Morphology&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Singular&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;1km --&amp;gt; one kilometer&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Plural&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;5km --&amp;gt; five kilometers&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Text Normalization&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Fraction, nine out of ten&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;The O.S. Speed T1202 ups the ante for race-winning performance, resulting in a power plant that will dominate &lt;STRONG&gt;9/10&lt;/STRONG&gt; scale competition.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Date, September tenth&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;1st episode will air &lt;STRONG&gt;9/10&lt;/STRONG&gt; with never before seen video of her birth!&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Abbreviation Expansion&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Street&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;Oh man, biking from 24th &lt;STRONG&gt;St&lt;/STRONG&gt; BART to the 29th &lt;STRONG&gt;St&lt;/STRONG&gt; bikeshare station, that will be sweet.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Saint&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;We continue to ask anyone who was in the wider area near &lt;STRONG&gt;St&lt;/STRONG&gt; Heliers School between 7.30am and 9am and witnessed any suspicious activity to contact police&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Polyphone Disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;p r ih - z eh 1 n t&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;The prices will &lt;STRONG&gt;present&lt;/STRONG&gt; the estimated discount utilizing the drug discount card.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;p r eh 1 - z ax n t&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;But our &lt;STRONG&gt;present&lt;/STRONG&gt; situation is not a natural one.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Most pronunciations are affected by these categories based on syntactic or semantic context, and these categories are all challenging disambiguation problems. The traditional TTS approach is a pipeline-based module called “text analyzer” with a series of models aimed at solving grammar disambiguation problems, which causes some of the following issues:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Complex model&lt;/STRONG&gt;. Redundant models are built and optimized separately but deployed together in the traditional text analyzer, which makes the pipeline long and complicated.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error propagation&lt;/STRONG&gt;. Errors accumulate across the isolated models and affect the final results.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High latency&lt;/STRONG&gt;. Models run one by one in the traditional pipeline-based text analyzer, so the time cost in the runtime server is high.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Compared to the traditional pipeline-based text analyzers, our Neural TTS proposes a Unified Neural Text Analyzer model (UniTA) to improve TTS pronunciation.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It builds a &lt;STRONG&gt;unified&lt;/STRONG&gt; text analyzer model, which greatly simplifies the text analyzer workflow and reduces time latency in the runtime server.&lt;/LI&gt;
&lt;LI&gt;It adopts a &lt;STRONG&gt;multitask learning approach&lt;/STRONG&gt;, jointly training all ambiguity models to solve context ambiguity and generate the correct pronunciations, reducing pronunciation errors by over 50%.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How does UniTA improve pronunciations?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Firstly, UniTA converts the input text to word embedding vectors through a pre-trained model. Word embedding is a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers. Conceptually, it involves a mathematical embedding from a space with many dimensions per word to a continuous vector space with a much lower dimension. Pre-training models like &lt;A href="https://www.microsoft.com/en-us/research/blog/a-holistic-representation-toward-integrative-ai/" target="_blank" rel="noopener"&gt;XYZ-Code&lt;/A&gt; have demonstrated unprecedented effectiveness for learning universal language representations from unlabeled corpora, and the method has achieved great success in many tasks such as language understanding and language generation.&lt;/P&gt;
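&lt;P&gt;As an illustration of this step, the short sketch below uses a generic pretrained BERT-style encoder from the Hugging Face transformers library to turn a sentence into per-token embedding vectors; the model name is only an example and is not the encoder used inside UniTA.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative only: a generic pretrained encoder, not the actual model used by UniTA.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

text = "TV Apps and live streaming offerings from The Weather Network"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# One contextual embedding vector per (sub)word token.
token_embeddings = outputs.last_hidden_state  # shape: (1, num_tokens, hidden_size)
print(token_embeddings.shape)&lt;/LI-CODE&gt;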
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Secondly, a sequence tagging fine-tuning strategy is adopted in the UniTA model. UniTA is designed as a typical word classification task, in which&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt; predicts whether each word delimiter is a word boundary or not.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Part-of-Speech&lt;/STRONG&gt; &lt;STRONG&gt;(POS)&lt;/STRONG&gt; predicts “noun”, “verb”, “adj” and so on to classify each word’s part of speech.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt; predicts “singular”, “plural”, “masculine”, “feminine”, “neuter” and so on to classify each word’s number, gender and case.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt; &lt;STRONG&gt;(TN)&lt;/STRONG&gt; classifies candidate digits as “cardinal”, “date”, “time”, “stock” or other TN categories, and then an auxiliary component, “TN Rule”, converts the digits to word form based on the predicted category.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Abbreviation Expansion&lt;/STRONG&gt; predicts the expanded form of each candidate abbreviation.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Polyphone disambiguation&lt;/STRONG&gt; predicts the pronunciation of polyphone words. An auxiliary component, “Lexicon”, is used here to obtain the pronunciations of non-polyphone words.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Different from the traditional text analyzer training models, UniTA adopts a multitask learning approach to jointly train all categories together, including word segmentation, part-of-speech tagging, morphology, abbreviation expansion, text normalization and polyphone disambiguation. The multitask learning approach shares hidden-layer information and jointly trains across different tasks, which has achieved state-of-the-art results on many NLP tasks. In UniTA, hidden information is likewise shared among the models during training.&lt;/P&gt;
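&lt;P&gt;To make the idea concrete, here is a minimal, hypothetical sketch of multitask sequence tagging in PyTorch: a single shared encoder feeds several lightweight per-task classification heads, so the tasks share hidden representations while keeping their own outputs. The layer sizes and task names are illustrative and do not reflect the actual UniTA implementation.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hypothetical sketch of a shared encoder with per-task tagging heads; not the real UniTA model.
import torch
import torch.nn as nn

class MultitaskTagger(nn.Module):
    def __init__(self, vocab_size=30000, hidden=256, task_label_sizes=None):
        super().__init__()
        # Shared layers: embeddings plus a bidirectional encoder over the token sequence.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # One small classification head per task (POS, morphology, polyphone, ...).
        task_label_sizes = task_label_sizes or {"pos": 17, "morphology": 9, "polyphone": 40}
        self.heads = nn.ModuleDict(
            {task: nn.Linear(2 * hidden, n_labels) for task, n_labels in task_label_sizes.items()}
        )

    def forward(self, token_ids):
        shared, _ = self.encoder(self.embed(token_ids))
        # Each head predicts one tag per token, reusing the same shared representation.
        return {task: head(shared) for task, head in self.heads.items()}

model = MultitaskTagger()
dummy_tokens = torch.randint(0, 30000, (1, 12))  # one sentence of 12 token ids
logits = model(dummy_tokens)
# Joint training would sum (or weight) the per-task losses so all tasks update the shared encoder.
print({task: out.shape for task, out in logits.items()})&lt;/LI-CODE&gt;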
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, the sentence “&lt;EM&gt;St. John had a 10-3 run to build its lead to 78-64 with 4:44 left.&lt;/EM&gt;” in the training corpus is annotated as shown in the table below. “--” means there is no related tag in that category. In the word segmentation column, the phrase “10-3” is segmented as “10”, “-” and “3”; in the morphology column, the word “had” is annotated as “past tense”; in the text normalization column, “10-3” is read with the word “to” instead of “-”, while “4:44” follows the time format; in the abbreviation column, the word “St.” is expanded as “Saint” rather than “Street”; and in the polyphone disambiguation column, the word “lead” is pronounced as [l i: d]. In fact, the word “lead” has two pronunciations: it is pronounced as [l i: d] when its POS is noun and as [l e d] when its POS is verb. This means the POS results and the polyphone results can share inner information. In this way, the multitask model improves UniTA accuracy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="613"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;Word&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;&lt;STRONG&gt;Part-of-Speech&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;&lt;STRONG&gt;Abbreviation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;&lt;STRONG&gt;Polyphone disambiguation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;St.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;Saint&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;John&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;had&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Past tense&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;a&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Det&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;10-3&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;10 / - / 3&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as “ten to three”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;run&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Singular&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;to&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Particle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;build&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;its&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Det&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;lead&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Singular&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;l i: d&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;to&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Particle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;78-64&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;78 / - / 64&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as “seventy-eight to sixty-four”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;with&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Prep&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;4:44&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;4 / : / 44&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as time format&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;left&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Past participle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Symbol&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The UniTA model predicts the results of all categories together in the neural TTS runtime service. As in training, UniTA converts the plain text to word embeddings, and then the multitask sequence tagging model predicts the results for all categories. Some auxiliary modules are applied after the fine-tuned categories to further improve pronunciations. Finally, the pronunciation results are generated from UniTA.&lt;/P&gt;
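&lt;P&gt;Conceptually, the runtime flow can be sketched as a single tagging pass over the text followed by a few lightweight auxiliary modules. The toy Python sketch below only illustrates that order of operations; all of its components are simplified stand-ins, not the real UniTA modules.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Toy sketch of the runtime flow described above; every component is a simplified stand-in.
def encode(words):
    # Stand-in for the pretrained encoder: one fake "embedding" per word.
    return [hash(w) % 100 for w in words]

def tag(words, embeddings):
    # Stand-in for the multitask tagger: predicts all categories per word in one pass.
    tags = []
    for w in words:
        tags.append({
            "tn_category": "score" if "-" in w and w.replace("-", "").isdigit() else None,
            "abbreviation": "Saint" if w == "St." else None,
            "polyphone": "l i: d" if w == "lead" else None,
        })
    return tags

def lexicon(word):
    # Stand-in for the pronunciation lexicon used for non-polyphone words.
    return "/" + word.lower() + "/"

def analyze(text):
    words = text.split()
    tags = tag(words, encode(words))
    pronunciations = []
    for word, t in zip(words, tags):
        if t["tn_category"]:              # digits expanded by the TN rule for the predicted category
            pronunciations.append(word.replace("-", " to "))
        elif t["abbreviation"]:           # abbreviation expanded first, then looked up
            pronunciations.append(lexicon(t["abbreviation"]))
        elif t["polyphone"]:              # polyphone words use the predicted pronunciation
            pronunciations.append(t["polyphone"])
        else:
            pronunciations.append(lexicon(word))
    return pronunciations

print(analyze("St. John had a 10-3 run to build its lead"))&lt;/LI-CODE&gt;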
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is the figure of the UniTA model structure in Neural TTS:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="UniTA-Diagram.png" style="width: 747px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249840i3AB483F3C9C9FE60/image-size/large?v=v2&amp;amp;px=999" role="button" title="UniTA-Diagram.png" alt="UniTA model diagram" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;UniTA model diagram&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Pronunciation accuracy improved with UniTA&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Compared with the traditional TTS text analyzer, UniTA reduces pronunciation errors by over 50%. It is already used for many neural voice languages, such as English (United States), English (United Kingdom), Chinese (Mandarin, simplified), Russian (Russia), German (Germany), Japanese (Japan), Korean (Korea), Polish (Poland) and Finnish (Finland). Because grammar varies across languages, not all categories apply to every language. For example, Chinese and Japanese depend heavily on word segmentation and polyphone disambiguation, while they have little need for morphology or abbreviation expansion.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are some samples of the pronunciation improvement using UniTA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;STRONG&gt;Input text&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;(target word bolded)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;STRONG&gt;Previous pronunciation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;STRONG&gt;Current pronunciation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;太子与三殿下行过礼后坐了片刻就离开了。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;“三殿&lt;/SPAN&gt; / &lt;SPAN&gt;下行 &lt;/SPAN&gt;/ &lt;SPAN&gt;过礼”&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-1-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;“三殿下 &lt;/SPAN&gt;/ &lt;SPAN&gt;行过礼”&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-1-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;叶奎最终还是在剧痛下泄了气&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;“&lt;SPAN&gt;剧痛 &lt;/SPAN&gt;/ &lt;SPAN&gt;下泄了气&lt;/SPAN&gt;”&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-2-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;“&lt;SPAN&gt;剧痛下 &lt;/SPAN&gt;/ &lt;SPAN&gt;泄了气&lt;/SPAN&gt;”&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-2-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;German (Germany)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;kulturform&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;kult+urform&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/kulturform.old.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;kultur+form&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/kulturform.new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Korean (Korea)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;해외감염&lt;STRONG&gt;병&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;h&lt;SPAN&gt;̬ɛ&lt;/SPAN&gt;w&lt;SPAN&gt;ɛ&lt;/SPAN&gt;g&lt;SPAN&gt;̥&lt;/SPAN&gt;mj&lt;SPAN&gt;ʌ&lt;/SPAN&gt;m&lt;STRONG&gt;b&lt;/STRONG&gt;j&lt;SPAN&gt;ʌ&lt;/SPAN&gt;ŋ&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ko-kr_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;h̬ɛwɛg̥mjʌm&lt;STRONG&gt;p&lt;/STRONG&gt;jʌŋ&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ko-kr_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Morphology - case ambiguity&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;Количество ударов по воротам&lt;/SPAN&gt; (15 &lt;SPAN&gt;против &lt;/SPAN&gt;&lt;STRONG&gt;7)&lt;/STRONG&gt; &lt;SPAN&gt;также говорит о преимуществе чемпионов мира&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;Семь&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ru-ru_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;Семи &lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ru-ru_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Abbreviation Expansion&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;Joined &lt;STRONG&gt;TX&lt;/STRONG&gt; Army National Guard in 1979.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;T.X.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TX-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Texas&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TX-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Text Normalization&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;The Downtown Cabaret Theatre’s Main Stage Theatre division concludes its &lt;STRONG&gt;2010/11&lt;/STRONG&gt; season with the Tony Award winning musical, in the heights by Lin-Manuel Miranda.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;November 2010&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/date-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;2010 to 2011&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/date-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;卓文君听琴后，理解了琴&lt;STRONG&gt;曲&lt;/STRONG&gt;的含意，不由脸红耳热，心驰神往。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;qu1&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/poli-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;qu3&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/poli-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;I received a copy early in November, and &lt;STRONG&gt;read&lt;/STRONG&gt; and contemplated it's provisions with great satisfaction.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/read-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/read-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Japanese (Japan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;パッケージには、富士屋ホテルが発刊した「We Japanese&lt;SPAN&gt;」&lt;STRONG&gt;内&lt;/STRONG&gt;の説明用の挿絵を採用。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;うち&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;(w u - ch i)&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ja-jp_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;ない&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;(n a - y i)&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ja-jp_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Hear how the Cortana voice pronounces each word accurately with UniTA.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/3ikql0ghLkE" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/3ikql0ghLkE/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Get started&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to continue to power accurate, natural and intuitive voice experiences for customers world-wide. Azure Text-to-Speech service provides more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech" target="_blank" rel="noopener"&gt;200 voices in over 50 languages&lt;/A&gt; for developers all over the world.&lt;/P&gt;
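&lt;P&gt;If you want to try one of the neural voices from code, the short Python sketch below uses the Speech SDK to synthesize a sentence to the default speaker; the key, region and voice name are placeholders to replace with values from your own Speech resource.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch using the Azure Speech SDK for Python (pip install azure-cognitiveservices-speech).
# Replace the key, region and voice name with values from your own Speech resource.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # any voice from the language support list

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("I read the book last week, and now I live near the lead mine.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized to the default speaker.")
else:
    print("Synthesis did not complete:", result.reason)&lt;/LI-CODE&gt;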
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Let us know how you are using or plan to use Neural TTS voices in this&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener noreferrer"&gt;form&lt;/A&gt;&lt;SPAN&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing your experience and developing more compelling services together with you for the developers around the world.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our &lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 28 Jan 2021 09:38:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2021-01-28T09:38:10Z</dc:date>
    </item>
    <item>
      <title>Get skilled on AI and ML – on your terms with Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678</link>
      <description>&lt;P&gt;Azure’s AI portfolio has options for every developer and data scientist, and we’re committed to empowering you to develop applications and machine learning models on your terms. Azure enables you to develop in your preferred language, environment, and machine learning framework, and allows you to deploy anywhere - to the cloud, on-premises, or the edge. We help improve your productivity regardless of your skill level, with code-first and low code/no code options which can help you accelerate the development process. We’re also devoted to empowering you with resources to help you get started with Azure AI and machine learning, grow your skills, and start building impactful solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Announcing new AI &amp;amp; ML resource pages for developers and data scientists&lt;/H2&gt;
&lt;P&gt;Today we’re excited to announce new resource pages on Azure.com, with a rich set of content for &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;data scientists&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;developers&lt;/A&gt;. Whether you’re new to AI and ML, or new to Azure, the videos, tutorials, and other content on these pages will help you get started.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Learn how your peers around the world are using Azure AI to develop AI and machine learning solutions on their terms to solve business challenges.&lt;/LI&gt;
&lt;LI&gt;Grow your skills with curated learning journeys to help you skill up on Azure AI and Machine Learning in 30 days. Each learning journey has videos, tutorials, and hands-on exercises to help prepare you to pass a Microsoft certification in just 4 weeks. Upon completing the learning journey, you’ll be eligible to receive 50% off a Microsoft Certification exam.&lt;/LI&gt;
&lt;LI&gt;Engage with our engineering teams and stay up to date with the latest innovations on our &lt;A href="https://aka.ms/AI_Hub" target="_blank" rel="noopener"&gt;AI Tech Community&lt;/A&gt;, where you’ll find blogs, discussion forums, and more.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-center"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="learn.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/250055i007A7A09B08E9477/image-size/large?v=v2&amp;amp;px=999" role="button" title="learn.jpg" alt="learn.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Pictured above: ML learning journey for developers and data scientists.&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Register for the Azure AI Hackathon&lt;/H2&gt;
&lt;P&gt;Finally, put your skills to the test by entering the &lt;A href="https://aka.ms/AzureAIHackathon" target="_blank" rel="noopener"&gt;Azure AI Hackathon&lt;/A&gt;, which starts today and will run through March 22&lt;SUP&gt;nd&lt;/SUP&gt;, 2021. Winners will be announced in early April. The most innovative and impactful projects will win prizes up to $10,000 USD. We look forward to seeing what you build with Azure AI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started today&lt;/H2&gt;
&lt;P&gt;Check out the pages to get started with your 30-day learning journey, and register for the hackathon:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;AI Developer Resources&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;Data Scientist Resources&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/AzureAIHackathon" target="_blank" rel="noopener"&gt;Azure AI Hackathon&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 27 Jan 2021 21:37:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678</guid>
      <dc:creator>Anand_Raman</dc:creator>
      <dc:date>2021-01-27T21:37:16Z</dc:date>
    </item>
    <item>
      <title>Re: How to build a personal finance app using Azure</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/bc-p/2097517#M155</link>
      <description>&lt;P&gt;&lt;img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@B71AFCCE02F5853FE57A20BD4B04EADD/images/emoticons/cool_40x40.gif" alt=":cool:" title=":cool:" /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 26 Jan 2021 11:00:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/bc-p/2097517#M155</guid>
      <dc:creator>pspaulding1025</dc:creator>
      <dc:date>2021-01-26T11:00:27Z</dc:date>
    </item>
    <item>
      <title>How to build a voice-enabled grocery chatbot with Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-voice-enabled-grocery-chatbot-with-azure-ai/ba-p/2096079</link>
      <description>&lt;P&gt;Chatbots have become increasingly popular in providing useful and engaging experiences for customers and employees. Azure services allow you to quickly create bots, add intelligence to them using AI, and customize them for complex scenarios.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through an exercise which you can complete in under two hours, to get started using Azure AI Services. This intelligent grocery bot app can help you manage your shopping list using voice commands. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="iPhoneview.png" style="width: 201px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249349iBBA75F61572086FD/image-size/medium?v=v2&amp;amp;px=400" role="button" title="iPhoneview.png" alt="iPhoneview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add or delete grocery items by dictating them to Alexa.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;Easily access the grocery list through an app.&lt;/LI&gt;
&lt;LI&gt;Check off items using voice commands; for example, “Alexa, remove Apples from my grocery list."&lt;/LI&gt;
&lt;LI&gt;Ask Alexa to read the items you have in your grocery list.&lt;/LI&gt;
&lt;LI&gt;Automatically organize items by category to help save time at the store.&lt;/LI&gt;
&lt;LI&gt;Use any laptop or &lt;A href="https://azure.microsoft.com/en-us/services/app-service/web/" target="_blank" rel="noopener"&gt;Web Apps&lt;/A&gt; to access the app and sync changes across laptop and phone.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/?OCID=AID3024570" target="_blank" rel="noopener"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://ms.portal.azure.com/#home?OCID=AID3024570" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.developer.amazon.com/en-US/alexa" target="_blank" rel="noopener"&gt;Amazon Alexa account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Python 3.6 or above&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Key components:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/bot-service/" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt; to develop bot and publish to Alexa channel.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://dev.botframework.com/" target="_blank" rel="noopener"&gt;Microsoft Bot Framework Emulator&lt;/A&gt; to test and debug bots using Bot Framework SDK.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.developer.amazon.com/en-US/alexa" target="_blank" rel="noopener"&gt;Alexa skills&lt;/A&gt; to interact with the bot using voice commands via Amazon Alexa.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/language-understanding-intelligent-service/" target="_blank" rel="noopener"&gt;Language Understanding&lt;/A&gt; to help users interact with the bot with natural language, by enabling the bot to understand user intent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Solution Architecture &lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="App Ref Architecture.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249350i5120341D7F826E08/image-size/medium?v=v2&amp;amp;px=400" role="button" title="App Ref Architecture.png" alt="App Ref Architecture.png" /&gt;&lt;/span&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The user accesses the chatbot by invoking it as an Alexa skill.&lt;/LI&gt;
&lt;LI&gt;User is authenticated with Azure Active Directory.&lt;/LI&gt;
&lt;LI&gt;User interacts with the chatbot powered by Azure Bot Service; for example, user requests bot to add grocery items to a list.&lt;/LI&gt;
&lt;LI&gt;Azure Cognitive Services process the natural language request to understand what the user wants to do. (Note: If you wanted to give your bot its own voice, you can choose from over 200 voices and 54 languages/locales. &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try the demo&lt;/A&gt; to hear the different natural sounding voices.)&lt;/LI&gt;
&lt;LI&gt;The bot adds or removes content in the database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Another visual of the flow of data within the solution architecture is shown below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="App flow.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249351i658B09F9C4B31CCA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="App flow.png" alt="App flow.png" /&gt;&lt;/span&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Implementation&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps involved in creating the app along with some sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’ll start by creating an Azure Bot Service instance, and adding speech capabilities to the bot using the Microsoft Bot Framework and the Alexa skill. Bot Framework, along with Azure Bot Service, provides the tools required to build, test, deploy, and manage the end-to-end bot development workflow. In this example, we are integrating Azure Bot Service with Alexa, which can process speech inputs for our voice-based chatbot. However, for chatbots deployed across multiple channels, and for more advanced scenarios, we recommend using Azure’s &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview" target="_blank" rel="noopener"&gt;Speech service&lt;/A&gt; to enable voice-based scenarios. &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try the demo&lt;/A&gt; to listen to the over 200 high quality voices available across 54 languages and locales.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The first step is to log in to the Azure portal and follow the steps &lt;A href="https://azure.microsoft.com/en-us/services/bot-service/#pricing" target="_blank" rel="noopener"&gt;here&lt;/A&gt; to create an Azure Bot Service resource and a web app bot. To add voice capability to the bot, click on Channels to add Alexa (see the snapshot below) and note the Alexa Service Endpoint URI.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Azure Bot Service Channels.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249353iBED4C224680F9538/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Azure Bot Service Channels.png" alt="Azure Bot Service Channels" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure Bot Service Channels&lt;/span&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Next, we need to log in to the Alexa Developer Console and create an Amazon Alexa skill. After creating the skill, we are presented with the interaction model.&amp;nbsp;&lt;SPAN&gt;Replace the contents of the JSON Editor with the example interaction model below.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{

&amp;nbsp; "interactionModel": {

&amp;nbsp;&amp;nbsp;&amp;nbsp; "languageModel": {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "invocationName": "Get grocery list",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "intents": [

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "AMAZON.FallbackIntent",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": []

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "AMAZON.CancelIntent",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": []

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&amp;nbsp;&amp;nbsp;&amp;nbsp;

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "AMAZON.HelpIntent",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": []

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "AMAZON.StopIntent",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": []

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "AMAZON.NavigateHomeIntent",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": []

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "Get items in the grocery",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "slots": [

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "name": "name",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "type": "AMAZON.US_FIRST_NAME"

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ],

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "samples": [

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"Get grocery items in the list",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "Do I have bread in my list",

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ]

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ],

&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "types": []

&amp;nbsp;&amp;nbsp;&amp;nbsp; }

&amp;nbsp; }

}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Next, we’ll integrate the Alexa Skill with our Azure bot. We’ll need two pieces of information to do this: the Alexa Skill ID and the Alexa Service Endpoint URI. First, get the Skill ID either from the URL in the Alexa portal, or by going to the Alexa Developer Console and clicking “View Skill ID”. The Skill ID should be a value like ‘amzn1.ask.skill.’ followed by a GUID. Then, get the Alexa Service Endpoint URI by going to the channels page of the Azure Web App Bot in the Azure portal and clicking on Alexa to copy the URI. Then integrate the two as shown:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Amazon Developer Console&lt;/STRONG&gt;: After building the Alexa Skill, click on Endpoint and paste the Alexa Service Endpoint URI that we copied from the Azure portal and save the Endpoints.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Amazon Developer Console.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249354i6D082908ABD21583/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Amazon Developer Console.jpg" alt="Amazon Developer Console.jpg" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Portal:&lt;/STRONG&gt; Go to the channels page of the Azure Bot, click on Alexa, and paste the Alexa Skill ID that we copied from the Alexa Developer Console.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Alexa config settings in Azure bot service.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249355i9D0AAD3055C69612/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Alexa config settings in Azure bot service.jpg" alt="Alexa config settings in Azure bot service.jpg" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Now, we’ll download and run the bot locally for testing with the &lt;A href="https://dev.botframework.com/" target="_blank" rel="noopener"&gt;Bot Framework Emulator&lt;/A&gt;. Click on “Build” in the Azure Web App Bot to download the bot’s source code, then modify app.py as shown below (a minimal sketch of the config.py it imports follows this list):&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

from http import HTTPStatus

from aiohttp import web
from aiohttp.web import Request, Response, json_response
from botbuilder.core import (
    BotFrameworkAdapterSettings,
    ConversationState,
    MemoryStorage,
    UserState,
)
from botbuilder.core.integration import aiohttp_error_middleware
from botbuilder.schema import Activity

from config import DefaultConfig
from dialogs import MainDialog, groceryDialog
from bots import DialogAndWelcomeBot

from adapter_with_error_handler import AdapterWithErrorHandler

CONFIG = DefaultConfig()

# Create adapter.
# See https://aka.ms/about-bot-adapter to learn more about how bots work.
SETTINGS = BotFrameworkAdapterSettings(CONFIG.APP_ID, CONFIG.APP_PASSWORD)

# Create MemoryStorage, UserState and ConversationState
MEMORY = MemoryStorage()
USER_STATE = UserState(MEMORY)
CONVERSATION_STATE = ConversationState(MEMORY)

# Create adapter.
# See https://aka.ms/about-bot-adapter to learn more about how bots work.
ADAPTER = AdapterWithErrorHandler(SETTINGS, CONVERSATION_STATE)

# Create dialogs and Bot
# IntelligentGrocery is the LUIS recognizer class defined elsewhere in this bot project.
RECOGNIZER = IntelligentGrocery(CONFIG)
grocery_DIALOG = groceryDialog()
DIALOG = MainDialog(RECOGNIZER, grocery_DIALOG)
BOT = DialogAndWelcomeBot(CONVERSATION_STATE, USER_STATE, DIALOG)

# Listen for incoming requests on /api/messages.
async def messages(req: Request) -&amp;gt; Response:
    # Main bot message handler.
    if "application/json" in req.headers["Content-Type"]:
        body = await req.json()
    else:
        return Response(status=HTTPStatus.UNSUPPORTED_MEDIA_TYPE)

    activity = Activity().deserialize(body)
    auth_header = req.headers["Authorization"] if "Authorization" in req.headers else ""

    response = await ADAPTER.process_activity(activity, auth_header, BOT.on_turn)
    if response:
        return json_response(data=response.body, status=response.status)
    return Response(status=HTTPStatus.OK)

APP = web.Application(middlewares=[aiohttp_error_middleware])
APP.router.add_post("/api/messages", messages)

if __name__ == "__main__":
    try:
        web.run_app(APP, host="localhost", port=CONFIG.PORT)
    except Exception as error:
        raise error
​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Next, we’ll run and test the bot with the Bot Framework Emulator. From the terminal, navigate to the code folder and run pip install -r requirements.txt to install the packages the bot requires. Once the packages are installed, run python app.py to start the bot. The bot is ready to test as shown below:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BF Emulator test.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249358i96872251C2ACD3CA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="BF Emulator test.jpg" alt="BF Emulator test.jpg" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="font-family: inherit;"&gt;Open the Bot Framework Emulator and connect to the bot by entering its endpoint URL, http://localhost:PORT/api/messages, replacing PORT with the port number shown in the terminal.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BF Emulator screenshot.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249359i544E91ECBB7FE80D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="BF Emulator screenshot.png" alt="Bot Framework Emulator view" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Bot Framework Emulator view&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
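&lt;P&gt;For reference, app.py imports a DefaultConfig class from config.py. A minimal sketch of that file, modeled on the standard Bot Framework Python samples (the port value and environment variable names are assumptions you can adjust), might look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# config.py - minimal sketch of the settings that app.py imports (values are assumptions).
import os


class DefaultConfig:
    """Bot configuration used by app.py."""

    PORT = 3978  # port commonly used by the Bot Framework Python samples
    APP_ID = os.environ.get("MicrosoftAppId", "")
    APP_PASSWORD = os.environ.get("MicrosoftAppPassword", "")
&lt;/LI-CODE&gt;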
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Now we’re ready to add natural language understanding so the bot can understand user intent. Here, we’ll use Azure’s Language Understanding Cognitive Service (LUIS), to map user input to an “&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-intent" target="_blank" rel="noopener"&gt;intent&lt;/A&gt;” and extract “&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-entity-types" target="_blank" rel="noopener"&gt;entities&lt;/A&gt;” from the sentence. In the below illustration, the sentence “add milk and eggs to the list” is sent as a text string to the LUIS endpoint. LUIS returns the JSON seen on the right.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS diagram.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249360i41B0A2780827D409/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS diagram.png" alt="Language Understanding utterances diagram" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding utterances diagram&lt;/span&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;Use the template below to create a LUIS JSON model file in which we specify intents and entities manually. In the &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;LUIS portal&lt;/A&gt;, create the “IntelligentGrocery” app under “Import New App” by uploading the JSON file containing the intents and entities below.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
      "text": "access the groceries list",
      "intent": "Show",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 11,
          "endPos": 19,
          "children": []
        }
      ]
    },
    {
      "text": "add bread to the grocery list",
      "intent": "Add",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 23,
          "endPos": 29,
          "children": []
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The above sample intents are for adding items and accessing the items in the grocery list. Now, it’s your turn to add additional intents to perform the below tasks, using the &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;LUIS portal&lt;/A&gt;. Learn more about how to create the intents &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/get-started-portal-build-app" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Intents&lt;/STRONG&gt;&lt;/P&gt;
&lt;TABLE width="624"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;&lt;STRONG&gt;Name &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;CheckOff&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Mark the grocery items as purchased.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;Confirm&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Confirm the previous action.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;Delete&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Delete items from the grocery list.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the intents and entities are added, we will need to train and publish the model so the LUIS app can recognize utterances pertaining to these grocery list actions.&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Portal.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249361i33AF87C4EED5EBA9/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Portal.png" alt="Language Understanding (LUIS) Portal" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding (LUIS) Portal&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="8"&gt;
&lt;LI&gt;After the model has been published in the LUIS portal, click ‘Access your endpoint Urls’ and copy the primary key, example query and endpoint URL for the prediction resource.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Build endpoint.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249363i329D39E3A396449C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Build endpoint.png" alt="Language Understanding endpoint" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding endpoint&lt;/span&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS prediction resource.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249366i3EFD563A9B93CDBF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS prediction resource.png" alt="Language Understanding (LUIS) Prediction view" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding (LUIS) Prediction view&lt;/span&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Navigate to the Settings page in the LUIS portal to retrieve the App ID.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Settings APP ID.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249367iCEEE2DCF7CCB339A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Settings APP ID.png" alt="Application settings" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Application settings&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL start="9"&gt;
&lt;LI&gt;Finally, test your Language Understanding model. The endpoint URL will be in the format below, with your own custom subdomain, and with your app ID and endpoint key replacing APP-ID and KEY-ID. At the end of the URL, enter a test utterance; for example, “get me all the items from the grocery list”. The JSON result identifies the top-scoring intent along with a confidence score, which is a good way to check whether LUIS predicts the intent you expect. (A minimal Python sketch of this request follows the URL below.)&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE width="625"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="625"&gt;
&lt;P&gt;&lt;A href="https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/APP-ID/slots/production/predict?subscription-key=KEY-ID&amp;amp;verbose=true&amp;amp;show-all-intents=true&amp;amp;log=true&amp;amp;query=YOUR_QUERY_HERE" target="_blank" rel="noopener"&gt;https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/APP-ID/slots/production/predict?subscription-key=KEY-ID&amp;amp;verbose=true&amp;amp;show-all-intents=true&amp;amp;log=true&amp;amp;query=YOUR_QUERY_HERE&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
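&lt;P&gt;As an illustration, here is a minimal Python sketch of calling the prediction endpoint with the requests library; the subdomain, app ID, and key are placeholders to replace with your own values.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query a published LUIS prediction endpoint (placeholder values assumed).
import requests

endpoint = "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com"
app_id = "APP-ID"          # LUIS App ID from the Settings page
prediction_key = "KEY-ID"  # primary key of the prediction resource

url = f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
params = {
    "subscription-key": prediction_key,
    "verbose": "true",
    "show-all-intents": "true",
    "log": "true",
    "query": "get me all the items from the grocery list",
}

response = requests.get(url, params=params)
response.raise_for_status()
prediction = response.json()["prediction"]

# Print the top-scoring intent and its confidence score.
print(prediction["topIntent"], prediction["intents"][prediction["topIntent"]]["score"])
&lt;/LI-CODE&gt;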
&lt;H2&gt;&lt;STRONG&gt;Additional Ideas&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;We’ve now seen how to build a voice bot leveraging Azure services to automate a common task. We hope it gives you a good starting point towards building bots for other scenarios as well. Try out some of the ideas below to continue building upon your bot and exploring additional Azure AI services.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add Google Home assistant as an additional channel to receive voice commands.&lt;/LI&gt;
&lt;LI&gt;Add a PictureBot extension to your bot and add pictures of your grocery items. You will need to create intents that trigger actions the bot can take, and entities that those actions require. For example, an intent for the PictureBot may be “SearchPics”. This could trigger Azure Cognitive Search to look for photos, using a “facet” entity to know what to search for. See what other functionality you can come up with!&lt;/LI&gt;
&lt;LI&gt;Use &lt;A href="https://www.qnamaker.ai/" target="_blank" rel="noopener"&gt;Azure QnA maker&lt;/A&gt; to enable your bot to answer FAQs from a knowledge base. Add a bit of personality using the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/chit-chat-knowledge-base?tabs=v1" target="_blank" rel="noopener"&gt;chit-chat&lt;/A&gt; feature.&lt;/LI&gt;
&lt;LI&gt;Integrate &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/" target="_blank" rel="noopener"&gt;Azure Personalizer&lt;/A&gt; with your voice chatbot to enables the bot to recommend a list of products to the user, providing a personalized experience.&lt;/LI&gt;
&lt;LI&gt;Include &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview" target="_blank" rel="noopener"&gt;Azure Speech service&lt;/A&gt; to give your bot a custom, high quality voice, with 200+ Text to Speech options across 54 different locales/languages, as well as customizable Speech to Text capabilities to process voice inputs.&lt;/LI&gt;
&lt;LI&gt;Try building this bot using &lt;A style="font-family: inherit; background-color: #ffffff;" href="https://docs.microsoft.com/en-us/composer/introduction" target="_blank" rel="noopener"&gt;Bot Framework Composer&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt;, a visual authoring canvas.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 00:30:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-voice-enabled-grocery-chatbot-with-azure-ai/ba-p/2096079</guid>
      <dc:creator>wmendoza</dc:creator>
      <dc:date>2021-01-26T00:30:09Z</dc:date>
    </item>
    <item>
      <title>How to build an intelligent travel journal using Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-an-intelligent-travel-journal-using-azure-ai/ba-p/2095168</link>
      <description>&lt;P&gt;AI capabilities can enhance many types of applications, enabling you to improve your customer experience and solve complex problems. With Azure Cognitive Services, you can easily access and customize industry-leading AI models, using the tools and languages of your choice.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through an exercise which you can complete in under an hour, to get started using Azure AI Services. Many of us are dreaming of traveling again, and building this intelligent travel journal app can help you capture memories from your next trip, whenever that may be. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application&lt;/U&gt;&lt;/STRONG&gt;&lt;U&gt;:&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-align-left"&gt;Capture voice memos, voice tag photos, and transcribe speech to text.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Automatically tag your photos based on key phrase extraction and analysis of text in pictures.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Translate tags and text into desired language.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Organize your memos by key phrase and find similar travel experiences you enjoyed with AI-powered search.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="travel blog app image.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249197i5D7A090CB0DC4851/image-size/medium?v=v2&amp;amp;px=400" role="button" title="travel blog app image.jpg" alt="travel blog app image.jpg" /&gt;&lt;/span&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/?OCID=AID3024570" target="_self"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://ms.portal.azure.com/?OCID=AID3024570" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;To run the provided &lt;A href="https://github.com/Azure-Samples/AIDeveloperResources" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;, you will need &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Faka.ms%2Fvsdownload&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638687346%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=cKXR10KgmYmjZ8k5vFnzlNUcZGMl38oqoXHwsILIKj4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Visual Studio 2019&lt;/A&gt; and &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdotnet.microsoft.com%2Flearn%2Fdotnet%2Fhello-world-tutorial%2Fintro&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638697298%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=IPinEHwdLDuphf1OMth%2BmbGGiD0Sgy5qk95jAzHLTTA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;.NET Core 3.1&lt;/A&gt; or above (for FotoFly)&lt;/LI&gt;
&lt;LI&gt;Refer to this &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fdotnet%2Fcore%2Ftutorials%2Fpublishing-with-visual-studio&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638697298%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=%2BG5LCd6x4TCjwpvzkzqn1szVyALEW94EaxVd1eFeqww%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;tutorial&lt;/A&gt; for detailed guidance on how to publish a console app.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Key Azure technologies:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Speech Service &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription" target="_blank" rel="noopener"&gt;batch transcription&lt;/A&gt; for speech to text transcription&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt; for key phrase/intent extraction&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/" target="_blank" rel="noopener"&gt;Computer Vision&lt;/A&gt; for analyzing text in images&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/translator/reference/v3-0-translate" target="_blank" rel="noopener"&gt;Translator&lt;/A&gt; to normalize tags/text into desired language.&lt;/LI&gt;
&lt;LI&gt;Open Source &lt;A href="http://www.java2s.com/Open-Source/CSharp_Free_Code/Windows_Presentation_Foundation_Library/Download_Fotofly_Photo_Metadata_Library.htm" target="_blank" rel="noopener"&gt;FotoFly&lt;/A&gt; library for photo tagging. Alternatively, you can use blob metadata but functionality will be limited.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/search/" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; for AI-powered search.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;NOTE:&amp;nbsp;&lt;EM&gt;For more information, refer to the “&lt;U&gt;References.txt&lt;/U&gt;” file under the respective folders of the JournalHelper library project in the sample solution provided with this blog.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Solution Architecture&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="travel blog architecture image.png" style="width: 699px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249198i15CF06412667F92D/image-size/large?v=v2&amp;amp;px=999" role="button" title="travel blog architecture image.png" alt="travel blog architecture image.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;User records a voice memo; for example, to accompany an image they’ve captured. The recorded file is stored in a file repository (alternatively, you could use a &lt;A href="https://azure.microsoft.com/solutions/databases" target="_blank" rel="noopener"&gt;database&lt;/A&gt;).&lt;/LI&gt;
&lt;LI&gt;The recorded voice memo (e.g. .m4a) is converted into desired format (e.g. .wav), using Azure’s Speech Service batch transcription capability.&lt;/LI&gt;
&lt;LI&gt;The folder containing voice memos is uploaded to a Blob container.&lt;/LI&gt;
&lt;LI&gt;Images are uploaded into a separate container for analysis of any text within the photos, using Azure Computer Vision.&lt;/LI&gt;
&lt;LI&gt;Use Translator to translate text to different languages, as needed. This may be useful to translate foreign street signs, menus, or other text in images.&lt;/LI&gt;
&lt;LI&gt;Extract tags from the generated text files using Text Analytics, and send tags back to the corresponding image file. Tags can be travel related (#milan, #sunset, #Glacier National Park), or based on geotagging metadata, photo metadata (camera make, exposure, ISO), and more.&lt;/LI&gt;
&lt;LI&gt;Create a search indexer with Azure Cognitive Search, and use the generated index to search your intelligent travel journal.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Implementation&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Sample code&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The entire solution code is available for download at this &lt;A href="https://github.com/Azure-Samples/AIDeveloperResources" target="_blank" rel="noopener"&gt;link.&lt;/A&gt; Download/clone and follow instructions in ReadMe.md solution item for further setup.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Implementation summary&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The sample is implemented using various client libraries and samples available for Azure Cognitive Services. All these services are grouped together into a helper library project named “journalhelper”. In the library we introduce a helper class to help with scenarios that combine various Cognitive Services to achieve desired functionality.&lt;/P&gt;
&lt;P&gt;We use “.Net Core console app” as the front end to test the scenarios. This sample also uses another open source library (FotoFly), which is ported to .Net Core here, to access and edit image metadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps, along with sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Start by batch transcribing voice memos and extracting key tags from the text output. Group the input voice memos into a folder, upload them into an Azure Blob container or specify a list of their URLs, and use batch transcription to get results back into the Azure Blob container, as well as a folder in your file system. The following code snippet illustrates how helper functions can be grouped together for a specific piece of functionality. It combines the local file system, Azure storage containers, and the Cognitive Services Speech batch transcription API.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;Console.WriteLine("Uploading voice memos folder to blob container...");
Helper.UploadFolderToContainer(
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.VoiceMemosFolder),
customSettings.AzureBlobContainers.InputVoiceMemoFiles, deleteExistingContainer);
Console.WriteLine("Branch Transcribing voice memos using containers...");
//NOTE: Turn the pricing tier for Speech Service to standard for this below to work.

await Helper.BatchTranscribeVoiceMemosAsync(
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
          customSettings.SpeechConfigSettings.Key,
          customSettings.SpeechConfigSettings.Region);

Console.WriteLine("Extract transcribed text files into another container and folder, delete the intermediate container with json files...");

await Helper.ExtractTranscribedTextfromJsonAsync(
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.ExtractedTranscribedTexts,
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.BatchTranscribedFolder), true);
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Next, create tags from the transcribed text. Sample helper function using the Text Analytics client library is listed below.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;//text analytics
public static void CreateTagsForFolderItems(string key, string endpoint, string batchTranscribedFolder, string extractedTagsFolder)
{
    if (!Directory.Exists(batchTranscribedFolder))
    {
       Console.WriteLine("Input folder for transcribed files does not exist");
       return;
    }

    // ensure destination folder path exists
    Directory.CreateDirectory(extractedTagsFolder);
    TextAnalyticsClient textClient = TextAnalytics.GetClient(key, endpoint);

    var contentFiles = Directory.EnumerateFiles(batchTranscribedFolder);
    foreach(var contentFile in contentFiles
    {
var tags = TextAnalytics.GetTags(textClient, 
contentFile).ConfigureAwait(false).GetAwaiter().GetResult();

// generate output file with tags 
string outFileName = Path.GetFileNameWithoutExtension(contentFile);
                outFileName += @"_tags.txt";
string outFilePath = Path.Combine(extractedTagsFolder, outFileName);
File.WriteAllLinesAsync(outFilePath, tags).Wait() ;
    }
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The actual client library or service calls are made as shown:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;static public async Task&amp;lt;IEnumerable&amp;lt;string&amp;gt;&amp;gt; GetTags(TextAnalyticsClient 
client, string inputTextFilePath)
{
   string inputContent = await File.ReadAllTextAsync(inputTextFilePath);
   var entities = EntityRecognition(client, inputContent);
   var phrases = KeyPhraseExtraction(client, inputContent);
   var tags = new List&amp;lt;string&amp;gt;();
   tags.AddRange(entities);
   tags.AddRange(phrases);
   return tags;
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Update tags to the photo/image file, using the open source FotoFly library.&amp;nbsp; Alternatively, you can update the Blob metadata with these tags and include that in the search index, but the functionality will be limited to using Azure Blob storage.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;string taggedPhotoFile = photoFile.Replace(inputPhotosFolder,    
      OutPhotosFolder);
File.Copy(photoFile, taggedPhotoFile, true);

if (tags.Count &amp;gt; 0)
{
    ImageProperties.SetPhotoTags(taggedPhotoFile, tags);
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Other useful functions to complete the scenario are:
&lt;OL&gt;
&lt;LI&gt;Helper.ProcessImageAsync, and&lt;/LI&gt;
&lt;LI&gt;Helper.TranslateFileContent&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The first can extract text from images using Computer Vision, either through OCR or regular image analysis. The second detects the source language of a piece of text, translates it into the desired output language using Azure’s Translator service, and then creates additional tags for the image file (a minimal sketch of the underlying Translator call is shown below).&lt;/P&gt;
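&lt;P&gt;The project’s helpers are written in C#, but the underlying Translator v3 REST call is language-agnostic. As an illustration only, a minimal Python sketch of that call (the key, region, and sample text are placeholders) might look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: translate a piece of extracted text with the Translator v3 REST API.
# The subscription key, region, and input text below are placeholders.
import requests

key = "YOUR-TRANSLATOR-KEY"
region = "YOUR-RESOURCE-REGION"
endpoint = "https://api.cognitive.microsofttranslator.com"

def translate_text(text, to_language="en"):
    response = requests.post(
        f"{endpoint}/translate",
        params={"api-version": "3.0", "to": to_language},
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    response.raise_for_status()
    # The service detects the source language and returns one translation per target language.
    return response.json()[0]["translations"][0]["text"]

print(translate_text("Ristorante chiuso il lunedì"))
&lt;/LI-CODE&gt;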
&lt;OL start="5"&gt;
&lt;LI&gt;Finally, use Azure Cognitive Search to create an index from the extracted text files saved in the Blob container, enabling you to search for documents and create journal text files. For example, you can search for images by cities or countries visited, date, or even cuisines. You can also search for images by camera-related metadata or geolocation.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In this sample we have demonstrated simple built-in skillsets for entity and language detection. The solution can be further enhanced by adding additional data sources to process tagged images and their metadata, and adding additional information to the searches.&lt;/P&gt;
&lt;P&gt;NOTE:&amp;nbsp; &lt;EM&gt;The helper functions can be made more generic to take additional skillset input.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;public static async Task CreateSearchIndexerAsync(
    string serviceAdminKey, string searchSvcUrl,
    string cognitiveServiceKey,
    string indexName, string jsonFieldsFilePath,
    string blobConnectionString, string blobContainerName
    )
{
    // Its a temporary arrangment.  This function is not complete
    IEnumerable&amp;lt;SearchField&amp;gt; fields = SearchHelper.LoadFieldsFromJSonFile(jsonFieldsFilePath);

    // create index
    var searchIndex = await 
Search.Search.CreateSearchIndexAsync(serviceAdminKey, 
searchSvcUrl, indexName, fields.ToList());

    // get indexer client
    var indexerClient = 
Search.Search.GetSearchIndexerClient(serviceAdminKey, searchSvcUrl);

    // create azure blob data source
    var dataSource = await 
Search.Search.CreateOrUpdateAzureBlobDataSourceAsync(indexerClient, 
blobConnectionString, indexName, blobContainerName);

    // create indexer

    // create skill set with minimal skills
    List&amp;lt;SearchIndexerSkill&amp;gt; skills = new List&amp;lt;SearchIndexerSkill&amp;gt;();
            skills.Add(Skills.CreateEntityRecognitionSkill());
            skills.Add(Skills.CreateLanguageDetectionSkill());
     var skillSet = await 
Search.Search.CreateOrUpdateSkillSetAsync(indexerClient,
             indexName + "-skillset", skills, cognitiveServiceKey);

     var indexer = await Search.Search.CreateIndexerAsync(indexerClient, 
dataSource, skillSet, searchIndex);

     // wait for some time to have indexer run and load documents
     Thread.Sleep(TimeSpan.FromSeconds(20));

     await Search.Search.CheckIndexerOverallStatusAsync(indexerClient, 
             indexer);
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, search documents and generate the corresponding journal files, utilizing the following functions (a minimal sketch of an equivalent search query follows this list):&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Helper.SearchDocuments&lt;/LI&gt;
&lt;LI&gt;Helper.CreateTravelJournal&lt;/LI&gt;
&lt;/OL&gt;
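&lt;P&gt;For illustration, a minimal Python sketch of querying the index with the azure-search-documents client (the service endpoint, index name, key, and search text are placeholders) might look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query the travel journal index with the azure-search-documents package.
# The endpoint, index name, and key below are placeholders for your own search service.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://YOUR-SEARCH-SERVICE.search.windows.net",
    index_name="travel-journal-index",
    credential=AzureKeyCredential("YOUR-QUERY-KEY"),
)

# Find memos that mention a place, then use the hits to build a journal entry.
results = search_client.search(search_text="Milan sunset")
for doc in results:
    print(doc)
&lt;/LI-CODE&gt;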
&lt;H2&gt;Additional Ideas&lt;/H2&gt;
&lt;P&gt;In addition to the functionality described so far, there are many other ways you can leverage Azure AI to further enhance your intelligent travel journal and learn more advanced scenarios. We encourage you to explore some of the following ideas to enrich your app:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add real time voice transcription and store transcriptions in an &lt;A href="https://azure.microsoft.com/solutions/databases/" target="_blank" rel="noopener"&gt;Azure managed database&lt;/A&gt;, to correlate voice transcription with images in context.&lt;/LI&gt;
&lt;LI&gt;Include travel tickets and receipts as images for OCR-based image analysis (&lt;A href="https://azure.microsoft.com/en-gb/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;) and include them as journal artifacts.&lt;/LI&gt;
&lt;LI&gt;Use multiple data sources for a given search index. We have simplified and only included text files to index in this sample, but you can include the tagged photos from a different data source for the same search index.&lt;/LI&gt;
&lt;LI&gt;Add custom skills and data extraction for &lt;A href="https://docs.microsoft.com/en-us/azure/search/search-indexer-overview" target="_blank" rel="noopener"&gt;search indexer&lt;/A&gt;. Extract metadata from images and include as search content.&lt;/LI&gt;
&lt;LI&gt;Extract metadata from video and audio content using &lt;A href="https://azure.microsoft.com/services/media-services/video-indexer/" target="_blank" rel="noopener"&gt;Video Indexer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Experiment with &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;Language Understanding&lt;/A&gt; and generate more elaborate and relevant search content based on top scoring intents and entities. Sample keywords and questions related to current sample data are included in Objectives.docx solution item.&lt;/LI&gt;
&lt;LI&gt;Build a consumer front-end app that stitches all of this together and displays the journal in a UI.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 00:26:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-an-intelligent-travel-journal-using-azure-ai/ba-p/2095168</guid>
      <dc:creator>maddybutzbach</dc:creator>
      <dc:date>2021-01-26T00:26:41Z</dc:date>
    </item>
    <item>
      <title>How to build a personal finance app using Azure</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/ba-p/2088995</link>
      <description>&lt;P&gt;AI allows you to deliver breakthrough experiences in your apps. With Azure Cognitive Services, you can easily customize and deploy the same AI models that power Microsoft’s products, such as Xbox and Bing, using the tools and languages of your choice.&lt;/P&gt;
&lt;P&gt;In this blog we will walk through an exercise that you can complete in under an hour and learn how to build an application that can be useful for you, all while exploring a set of Azure services. If you have ever wanted to get your financial transactions in order, look no further. With this exercise, we’ll explore how to quickly take a snap of a receipt from your phone and upload it for categorization, creating expense reports, and to gain insights to your spending. Remember, even though we’ll walk you through each step, you can always explore the sample code and get creative with your own unique solution!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Snap a picture of your receipt and upload it using your smartphone&lt;/LI&gt;
&lt;LI&gt;Extract relevant data from the images: Who issued the receipt? What was the total amount? What was purchased? All of this information can be effortlessly stored for exploration&lt;/LI&gt;
&lt;LI&gt;Query the data: bring your receipts to life by extracting relevant and insightful information&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://azure.microsoft.com/en-us/features/azure-portal/" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;You will need to have &lt;A href="https://www.python.org/downloads/" target="_blank" rel="noopener"&gt;python&lt;/A&gt; installed locally to run some of the samples.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;Key&amp;nbsp;Azure technologies:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Azure Form Recognizer&lt;/A&gt; scans image documents with optical character recognition and extracts text, key/value pairs, and tables from documents, receipts, and forms.&lt;/LI&gt;
&lt;LI&gt;Form Recognizer’s &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts?tabs=v2-0" target="_blank" rel="noopener"&gt;prebuilt receipt model&lt;/A&gt; specifically extracts receipt data&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/storage/" target="_blank" rel="noopener"&gt;Azure Blob Storage&lt;/A&gt; is used to store data&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; enriches the data by making it easily identifiable&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Solution Architecture&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ReceiptUploaderSolutionArchitecture.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248818iEB5D2E8135093DAA/image-size/large?v=v2&amp;amp;px=999" role="button" title="ReceiptUploaderSolutionArchitecture.png" alt="ReceiptUploaderSolutionArchitecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;User uploads a receipt image from their mobile device&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;The uploaded image is verified and then sent to the Azure Form Recognizer to extract information&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The image is analysed by the REST API within the Form Recognizer prebuilt receipt model&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;A JSON is returned that has both the text information and bounding box coordinates of the extracted receipt data&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;The resulting JSON is parsed and a simpler JSON is formed, saving only the relevant information needed&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;This receipt JSON is then stored in Azure Blob Storage &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Azure Cognitive Search points directly to Azure Blob Storage and is used to index the data &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The application queries this search index to extract relevant information from the receipts&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Another visual of the flow of data within the solution architecture is shown below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="FlowChart.png" style="width: 602px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249275i243E605E6E4CDDD1/image-size/large?v=v2&amp;amp;px=999" role="button" title="FlowChart.png" alt="FlowChart.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we’ve explored the technology and services we’ll be using, let’s dive into building our app!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Implementation&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;To get started, data from receipts must be extracted; this is done by setting up the Form Recognizer service in Azure and connecting to the service to use the relevant API for receipts. A JSON is returned that contains the information extracted from receipts and is stored in Azure Blob Storage to be used by Azure Cognitive Search. Cognitive Search is then utilized to index the receipt data, and to search for relevant information.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps, along with sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to the Azure portal and&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" target="_blank" rel="noopener"&gt;create a new Form Recognizer resource&lt;/A&gt;&lt;/SPAN&gt;. In the&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;pane, provide the following information:&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;A descriptive name for your resource.&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Subscription&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;Select the Azure subscription which has been granted access.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Location&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Pricing Tier&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The cost of your resource depends on the pricing tier you choose and your usage. For more information, see the API&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://azure.microsoft.com/pricing/details/cognitive-services/" target="_blank" rel="noopener"&gt;pricing details&lt;/A&gt;&lt;/SPAN&gt;.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Resource Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group" target="_blank" rel="noopener"&gt;Azure resource group&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;that will contain your resource. You can create a new group or add it to a pre-existing group.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;After Form Recognizer deploys, go to All Resources and locate the newly deployed resource. Save the key and endpoint from the resource’s key and endpoint page somewhere so you can access it later. &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;You can use the following &lt;SPAN&gt;&lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeReceiptAsync" target="_blank" rel="noopener"&gt;Analyze Receipt API&lt;/A&gt;&lt;/SPAN&gt; to start analyzing the receipt. Remember to replace &amp;lt;endpoint&amp;gt; &amp;amp; &amp;lt;subscription key&amp;gt; the values you saved earlier and replace &amp;lt;path to your receipt&amp;gt; with the local path to your scanned receipt image.&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Analyse script

import json
import time
from requests import get, post

# Endpoint URL
endpoint = r"&amp;lt;endpoint url&amp;gt;"
apim_key = "&amp;lt;subscription key&amp;gt;"
post_url = endpoint + "/formrecognizer/v2.0/prebuilt/receipt/analyze"
source = r"&amp;lt;path to your receipt&amp;gt;"

headers = {
    # Request headers
    'Content-Type': 'image/jpeg',
    'Ocp-Apim-Subscription-Key': apim_key,
}

params = {
    "includeTextDetails": True
}

with open(source, "rb") as f:
    data_bytes = f.read()

try:
    resp = post(url=post_url, data=data_bytes, headers=headers, params=params)
    if resp.status_code != 202:
        print("POST analyze failed:\n%s" % resp.text)
        quit()
    print("POST analyze succeeded:\n%s" % resp.headers)
    get_url = resp.headers["operation-location"]
except Exception as e:
    print("POST analyze failed:\n%s" % str(e))
    quit()
​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;If you run this code and everything is as it should be, you'll receive a&amp;nbsp;&lt;STRONG&gt;202 (Success)&lt;/STRONG&gt;&amp;nbsp;response that includes an&amp;nbsp;&lt;STRONG&gt;Operation-Location&lt;/STRONG&gt;&amp;nbsp;header, which the script will print to the console. This header contains an &lt;STRONG&gt;operation id&lt;/STRONG&gt; that you can use to query the status of the asynchronous operation and get the results. In the following example value, the string after&amp;nbsp;operations/&amp;nbsp;is the operation ID.&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="601"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://cognitiveservice/formrecognizer/v2.0/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb" target="_blank" rel="noopener"&gt;https://cognitiveservice/formrecognizer/v2.0/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Now you can call the&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/GetAnalyzeReceiptResult" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Get Analyze Receipt Result&lt;/STRONG&gt;&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;API to get the Extracted Data.&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Get results.
n_tries = 10
n_try = 0
wait_sec = 6
while n_try &amp;lt; n_tries:
    try:
        resp = get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
        resp_json = json.loads(resp.text)
        if resp.status_code != 200:
            print("GET Receipt results failed:\n%s" % resp_json)
            quit()
        status = resp_json["status"]
        if status == "succeeded":
            print("Receipt Analysis succeeded:\n%s" % resp_json)
            quit()
        if status == "failed":
            print("Analysis failed:\n%s" % resp_json)
            quit()
        # Analysis still running. Wait and retry.
        time.sleep(wait_sec)
        n_try += 1
    except Exception as e:
        msg = "GET analyze results failed:\n%s" % str(e)
        print(msg)
        quit()
​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This code uses the operation ID to poll the service, waiting and retrying until the analysis either succeeds or fails. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;The returned JSON can be examined to get the required information: the ‘readResults’ field contains all the lines of text that were recognized, and the ‘documentResults’ field contains key/value information for the most relevant parts of the receipt (e.g. the merchant, total, line items, etc.); a minimal parsing sketch follows this list.&lt;SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;The receipt image below, &lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt; &lt;SPAN&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="contosoReceipt.jpg" style="width: 496px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249243iEE3167D5881B66D2/image-size/large?v=v2&amp;amp;px=999" role="button" title="contosoReceipt.jpg" alt="contosoReceipt.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
resulted in the JSON from which we have extracted the following details: &lt;SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="json"&gt; MerchantName: THE MAD HUNTER 
 TransactionDate: 2020-08-23 
 TransactionTime: 22:07:00 
 Total: £107.10 &lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
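&lt;P&gt;For reference, here is a minimal sketch (assuming the v2.0 prebuilt receipt response shape described above, held in the resp_json variable from the polling step) of pulling these fields out of the ‘documentResults’ section:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: read the key receipt fields from a successful
# Get Analyze Receipt Result response (resp_json from the polling step above).
# Each field exposes the raw string under "text" alongside its typed value.

def print_receipt_fields(resp_json):
    fields = resp_json["analyzeResult"]["documentResults"][0]["fields"]
    for name in ("MerchantName", "TransactionDate", "TransactionTime", "Total"):
        print("%s: %s" % (name, fields.get(name, {}).get("text", "")))

# Example usage once status == "succeeded":
# print_receipt_fields(resp_json)
&lt;/LI-CODE&gt;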
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;We will now create a JSON document from all the data extracted from the analysed receipt. The structure of the JSON is shown below:&lt;BR /&gt;&lt;LI-CODE lang="json"&gt;{
   "id":"INV001",
   "user":"Sujith Kumar",
   "createdDateTime":"2020-10-23T17:16:32Z",
   "MerchantName":"THE MAD HUNTER",
   "TransactionDate":"2020-10-23",
   "TransactionTime":"22:07:00",
   "currency":"GBP",
   "Category":"Entertainment",
   "Total":"107.10",
   "Items":[	]
}​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;We can now save this JSON and build a search service to extract the information we want from it.&lt;/P&gt;
&lt;P&gt;Before continuing onto step 8, you must have an Azure Storage Account with Blob storage.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
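&lt;P&gt;As an illustration, here is a minimal sketch of uploading such a JSON document to a container with the azure-storage-blob package; the connection string, container name and blob name below are placeholders, and the container is assumed to already exist:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: save the receipt JSON to an Azure Blob Storage container.
# The connection string, container and blob names are placeholders.
import json
from azure.storage.blob import BlobServiceClient

connection_string = "&amp;lt;storage account connection string&amp;gt;"
container_name = "receipts"

receipt_doc = {
    "id": "INV001",
    "user": "Sujith Kumar",
    "MerchantName": "THE MAD HUNTER",
    "Category": "Entertainment",
    "Total": "107.10"
    # ... remaining fields from the structure shown in step 7
}

service = BlobServiceClient.from_connection_string(connection_string)
container = service.get_container_client(container_name)
container.upload_blob(name="INV001.json", data=json.dumps(receipt_doc), overwrite=True)
&lt;/LI-CODE&gt;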
&lt;OL start="8"&gt;
&lt;LI&gt;We will now save the JSON files in an &lt;STRONG&gt;Azure Blob Storage&lt;/STRONG&gt; container and use it as a source for the &lt;STRONG&gt;Azure Cognitive Search Service Index&lt;/STRONG&gt; that we will create. &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Sign-in to the Azure Portal and search for "Azure Cognitive Search" or navigate to the resource through&amp;nbsp;&lt;STRONG&gt;Web&lt;/STRONG&gt;&amp;nbsp;&amp;gt;&amp;nbsp;&lt;STRONG&gt;Azure Cognitive Search&lt;/STRONG&gt;. Follow the steps to:&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;Choose a subscription&lt;/LI&gt;
&lt;LI&gt;Set a resource group&lt;/LI&gt;
&lt;LI&gt;Name the service appropriately&lt;/LI&gt;
&lt;LI&gt;Choose a location&lt;/LI&gt;
&lt;LI&gt;Choose a pricing tier for this service&lt;/LI&gt;
&lt;LI&gt;Create your service&lt;/LI&gt;
&lt;LI&gt;Get a key and URL endpoint &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We will use the free tier of the Azure Cognitive Search service, which allows you to create three indexes, three data sources and three indexers. The dashboard will show you how many of each you have left. For this exercise you will create one of each.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="10"&gt;
&lt;LI&gt;In the portal, find the search service you created above and click &lt;STRONG&gt;Import data&lt;/STRONG&gt; on the command bar to start the wizard. In the wizard, click on Connect to your data and specify the name, type, and connection information. Skip the ‘Enrich Content’ page and go to &lt;STRONG&gt;Customize Target Index&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;For this exercise, we will use the wizard to generate a basic index for our receipt data. Minimally, an index requires a name and a fields collection; one of the fields should be marked as the document key to uniquely identify each document.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Fields have data types and attributes. The check boxes across the top are&amp;nbsp;&lt;EM&gt;index attributes&lt;/EM&gt;&amp;nbsp;controlling how the field is used.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Retrievable&lt;/STRONG&gt;&amp;nbsp;means that the field shows up in the search results list. You can exclude individual fields from search results by clearing this checkbox.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Key&lt;/STRONG&gt;&amp;nbsp;is the unique document identifier. It's always a string, and it is required.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Filterable&lt;/STRONG&gt;,&amp;nbsp;&lt;STRONG&gt;Sortable&lt;/STRONG&gt;, and&amp;nbsp;&lt;STRONG&gt;Facetable&lt;/STRONG&gt;&amp;nbsp;determine whether fields are used in a filter, sort, or faceted navigation structure.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Searchable&lt;/STRONG&gt;&amp;nbsp;means that a field is included in full text search. Only string fields are searchable.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make sure you choose the following fields (a sketch of the equivalent index definition via the REST API follows this list):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;id&lt;/LI&gt;
&lt;LI&gt;user&lt;/LI&gt;
&lt;LI&gt;createdDateTime&lt;/LI&gt;
&lt;LI&gt;MerchantName&lt;/LI&gt;
&lt;LI&gt;TransactionDate&lt;/LI&gt;
&lt;LI&gt;TransactionTime&lt;/LI&gt;
&lt;LI&gt;currency&lt;/LI&gt;
&lt;LI&gt;Category&lt;/LI&gt;
&lt;LI&gt;Total&lt;/LI&gt;
&lt;/UL&gt;
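&lt;P&gt;Equivalently, the same index can be defined programmatically through the Create Index REST API rather than the wizard. A minimal sketch is shown below; the service name, index name and admin key are placeholders, and the attribute flags simply mirror the checkboxes described above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: define the receipt index via the Create/Update Index REST API.
# Service name, index name and admin key are placeholders.
import requests

service_name = "&amp;lt;search service name&amp;gt;"
admin_key = "&amp;lt;admin key&amp;gt;"
index_name = "receipts-index"

index_definition = {
    "name": index_name,
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True, "filterable": True},
        {"name": "user", "type": "Edm.String", "searchable": True, "filterable": True},
        {"name": "createdDateTime", "type": "Edm.DateTimeOffset", "sortable": True},
        {"name": "MerchantName", "type": "Edm.String", "searchable": True},
        {"name": "TransactionDate", "type": "Edm.String", "filterable": True, "sortable": True},
        {"name": "TransactionTime", "type": "Edm.String"},
        {"name": "currency", "type": "Edm.String", "filterable": True},
        {"name": "Category", "type": "Edm.String", "searchable": True, "filterable": True, "facetable": True},
        {"name": "Total", "type": "Edm.String", "sortable": True}
    ]
}

resp = requests.put(
    "https://%s.search.windows.net/indexes/%s?api-version=2020-06-30" % (service_name, index_name),
    headers={"Content-Type": "application/json", "api-key": admin_key},
    json=index_definition,
)
print(resp.status_code)  # 201 Created (or 204 No Content on update)
&lt;/LI-CODE&gt;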
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="12"&gt;
&lt;LI&gt;Still in the&amp;nbsp;&lt;STRONG&gt;Import data&lt;/STRONG&gt;&amp;nbsp;wizard, click&amp;nbsp;&lt;STRONG&gt;Indexer&lt;/STRONG&gt;&amp;nbsp;&amp;gt;&amp;nbsp;&lt;STRONG&gt;Name&lt;/STRONG&gt;, and type a name for the indexer.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This object defines an executable process. For now, use the default option (&lt;STRONG&gt;Once&lt;/STRONG&gt;) to run the indexer once, immediately.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="13"&gt;
&lt;LI&gt;Click&amp;nbsp;&lt;STRONG&gt;Submit&lt;/STRONG&gt;&amp;nbsp;to create and simultaneously run the indexer.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Soon you should see the newly created indexer in the list, with status indicating "in progress" or success, along with the number of documents indexed.&lt;/P&gt;
&lt;P&gt;The main service page provides links to the resources created in your Azure Cognitive Search service. To view the index you just created, click&amp;nbsp;&lt;STRONG&gt;Indexes&lt;/STRONG&gt;&amp;nbsp;from the list of links.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="step13.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248821iFBF89B5FE72FA88E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="step13.png" alt="step13.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="14"&gt;
&lt;LI&gt;Click on the index (&lt;EM&gt;azureblob-indexer&lt;/EM&gt; in this case) from the list of links and view the index-schema.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Now you should have a search index that you can use to query the receipt data that’s been extracted from the uploaded receipts.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="15"&gt;
&lt;LI&gt;Click &lt;STRONG&gt;Search explorer&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="15.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249238i4DD762412E1DBF15/image-size/medium?v=v2&amp;amp;px=400" role="button" title="15.png" alt="15.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="16"&gt;
&lt;LI&gt;From the index dropdown, choose the relevant index. Use the default API version (2020-06-30) for this exercise.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="16.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249276i6A72C8DAABEBDD28/image-size/medium?v=v2&amp;amp;px=400" role="button" title="16.png" alt="16.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="17"&gt;
&lt;LI&gt;In the search bar, paste a query string (for example, &lt;STRONG&gt;category='Entertainment'&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;You will get results as verbose JSON documents as shown below:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="17.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248827iD8A00676A9051ED7/image-size/large?v=v2&amp;amp;px=999" role="button" title="17.png" alt="17.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that you have built a search index over your receipt data, you can query it programmatically (a minimal query sketch follows this list) and extract information to answer questions such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How much did I spend last Thursday?&lt;/LI&gt;
&lt;LI&gt;How much have I spent on entertainment over the last quarter?&lt;/LI&gt;
&lt;LI&gt;Did I spend anything at ‘The Crown and Pepper’ last month?&lt;/LI&gt;
&lt;/UL&gt;
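&lt;P&gt;For example, here is a minimal sketch of querying the index over REST; the service name, index name, key and filter are placeholders, and the filter assumes the Category field was marked as filterable:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query the receipt index with the Search Documents REST API.
# Service name, index name, key and filter are placeholders.
import requests

service_name = "&amp;lt;search service name&amp;gt;"
index_name = "receipts-index"
api_key = "&amp;lt;query or admin key&amp;gt;"

url = ("https://%s.search.windows.net/indexes/%s/docs/search?api-version=2020-06-30"
       % (service_name, index_name))
headers = {"Content-Type": "application/json", "api-key": api_key}
body = {
    "search": "*",
    "filter": "Category eq 'Entertainment'",  # OData filter over the Category field
    "select": "MerchantName,TransactionDate,Total"
}

resp = requests.post(url, headers=headers, json=body)
for doc in resp.json().get("value", []):
    print(doc["MerchantName"], doc["TransactionDate"], doc["Total"])
&lt;/LI-CODE&gt;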
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Additional Ideas&lt;/H2&gt;
&lt;P&gt;In addition to the services and functionalities used throughout this exercise, there are numerous other ways you can use Azure AI to build in support for all kinds of receipts or invoices. For example, the logo extractor can be used to identify logos of popular restaurants or hotel chains, and the business card model can ingest business contact information just as easily as we saw with receipts.&lt;/P&gt;
&lt;P&gt;We encourage you to explore some of the following ideas to enrich your application:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Search invoices for specific line items&lt;/LI&gt;
&lt;LI&gt;Train the models to recognize different expense categories such as entertainment, supplies, etc.&lt;/LI&gt;
&lt;LI&gt;Add &lt;A href="https://docs.microsoft.com/en-gb/azure/cognitive-services/LUIS/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Language Understanding (LUIS)&lt;/STRONG&gt;&lt;/A&gt; to ask your app questions in natural language and extract formatted reports&lt;/LI&gt;
&lt;LI&gt;Add &lt;A href="https://azure.microsoft.com/services/cognitive-services/qna-maker/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure QnA Maker&lt;/STRONG&gt;&lt;/A&gt; to your app and get insights such as how much you spent on entertainment last month, or other categories of insights you’d like to explore&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 01:19:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/ba-p/2088995</guid>
      <dc:creator>mernanashed</dc:creator>
      <dc:date>2021-01-26T01:19:08Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2095091#M152</link>
      <description>&lt;P&gt;&lt;LI-USER uid="590027"&gt;&lt;/LI-USER&gt;&amp;nbsp;I've sent a couple of emails to the team and have yet to hear. Wanted to make sure I was sending it to the proper email address. Thank you.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Jan 2021 18:50:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2095091#M152</guid>
      <dc:creator>aowens-jmt</dc:creator>
      <dc:date>2021-01-25T18:50:41Z</dc:date>
    </item>
    <item>
      <title>Re: QnA with Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/bc-p/2091020#M151</link>
      <description>&lt;P&gt;&lt;LI-USER uid="939196"&gt;&lt;/LI-USER&gt;,&amp;nbsp;this is brilliant feedback. We will put this in our backlog.&lt;/P&gt;</description>
      <pubDate>Sun, 24 Jan 2021 09:00:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/bc-p/2091020#M151</guid>
      <dc:creator>pchoudhari</dc:creator>
      <dc:date>2021-01-24T09:00:36Z</dc:date>
    </item>
    <item>
      <title>Re: QnA with Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/bc-p/2087274#M149</link>
      <description>&lt;P&gt;There are a few gaps with Q&amp;amp;A at the moment which I hope the product team addresses.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The first is the inability to associate metadata with the content as it's being ingested from a file or URL source.&amp;nbsp; For example, if you upload a PDF, it would be useful to be able to provide key-value pairs which are automatically set as metadata on the resultant index from that source.&amp;nbsp; At current, the process requires re-processing each item in the index and evaluating the source and then assigning metadata to the item.&amp;nbsp; This functionality already exists on the API as long as you are starting from a QnADTO object rather than a file.&amp;nbsp; A simple example is that you may want to associate a version identifier to a document that is ingested so that each Q&amp;amp;A entry has a version metadata tag associated.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The second is to "downgrade" a response.&amp;nbsp; Currently, you can kind of "promote" a response by adding an alternate phrasing to respond to a broader set of vocabularies, but there's no way to make a Q&amp;amp;A item &lt;EM&gt;less&lt;/EM&gt; relevant.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It's a great service, but still has room for growth.&lt;/P&gt;</description>
      <pubDate>Fri, 22 Jan 2021 15:53:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/bc-p/2087274#M149</guid>
      <dc:creator>CharlieDigital</dc:creator>
      <dc:date>2021-01-22T15:53:34Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2073433#M147</link>
      <description>&lt;P&gt;&lt;LI-USER uid="590027"&gt;&lt;/LI-USER&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks. This would definitely help. I will give it a go.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Jan 2021 09:17:55 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2073433#M147</guid>
      <dc:creator>HesselW</dc:creator>
      <dc:date>2021-01-19T09:17:55Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2071621#M146</link>
      <description>&lt;P&gt;&lt;LI-USER uid="881335"&gt;&lt;/LI-USER&gt;&amp;nbsp;and&amp;nbsp;&lt;LI-USER uid="913387"&gt;&lt;/LI-USER&gt;&amp;nbsp;Its great to hear that you are liking the new version of QnA Maker. We use two indexes per KB, only when you make the language setting KB specific instead of service specific, which is required to create your testing experience and relevance scores same as what you will see once published. In case you are creating KBs belonging to only one language in a service, then please don't use this setting when you are creating your first KB as this setting is allowed to be set only at the time of first KB creation once set you cannot update this setting. Please check:&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support?tabs=v2#supporting-multiple-languages-in-one-qna-maker-resource" target="_blank"&gt;Language support - QnA Maker - Azure Cognitive Services | Microsoft Docs&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jan 2021 17:26:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2071621#M146</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2021-01-18T17:26:27Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2071604#M145</link>
      <description>&lt;P&gt;&lt;LI-USER uid="897461"&gt;&lt;/LI-USER&gt;&amp;nbsp;Please feel free to drop us a mail at qnamakersupport@microsoft.com&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jan 2021 17:20:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2071604#M145</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2021-01-18T17:20:45Z</dc:date>
    </item>
    <item>
      <title>Re: Ignite 2020 Neural TTS updates: new language support, more voices and flexible deployment option</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/bc-p/2069557#M144</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;LI-USER uid="906662"&gt;&lt;/LI-USER&gt;&amp;nbsp;Thank you for reporting the issue. We are investigating the cause and will fix if it's a bug.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have a support plan and you need technical help, you can create a&amp;nbsp;&lt;A href="https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest" target="_blank" rel="noopener"&gt;support request&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;For&amp;nbsp;&lt;EM&gt;Issue type&lt;/EM&gt;, select&amp;nbsp;“Technical”.&lt;/LI&gt;
&lt;LI&gt;For&amp;nbsp;&lt;EM&gt;Subscription&lt;/EM&gt;, select your subscription.&lt;/LI&gt;
&lt;LI&gt;For&amp;nbsp;&lt;EM&gt;Service&lt;/EM&gt;, click&amp;nbsp;My services, then select “Cognitive Services”.&lt;/LI&gt;
&lt;LI&gt;For&amp;nbsp;&lt;EM&gt;Summary&lt;/EM&gt;, type a description of your issue.&lt;/LI&gt;
&lt;LI&gt;For&amp;nbsp;&lt;EM&gt;Problem type&lt;/EM&gt;, select&amp;nbsp;“Text to Speech”.&lt;/LI&gt;
&lt;LI&gt;For problem subtype, select “accuracy of speech output”.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 27 Jan 2021 14:29:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/bc-p/2069557#M144</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2021-01-27T14:29:36Z</dc:date>
    </item>
    <item>
      <title>Re: Ignite 2020 Neural TTS updates: new language support, more voices and flexible deployment option</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/bc-p/2064973#M143</link>
      <description>&lt;P&gt;&lt;SPAN class="style-scope yt-formatted-string"&gt;Hi Sir, I am really enjoying the audio content creation so far, but I got to ask a question as there seems to be a problem within the audio content creation page.&lt;/SPAN&gt; &lt;SPAN class="style-scope yt-formatted-string"&gt;I am especially using neural voices, but for the last few days, I am adjusting the RATE first, then INTONATION to make the pronounciation better like real speech, but as I am ADJUSTING THE INTONATION, THE RATE JUST GOES TO THE BASE TO 1.00. And also, when adjusting the INTONATION, IF I HAVE A RATE SET BEFORE, IT DOES NOT PREVIEW THE ADJUSTED INTONATION. I have been having this problem for some 3-5 days, and it seems like some kind of n annoying problem as I can't create the voices I have been for the last few weeks. I would appreciate if you can correct this problem. Thank you so much.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Jan 2021 17:04:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/bc-p/2064973#M143</guid>
      <dc:creator>serhat1141</dc:creator>
      <dc:date>2021-01-15T17:04:24Z</dc:date>
    </item>
    <item>
      <title>Enhanced Table Extraction from documents with Form Recognizer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011</link>
      <description>&lt;P&gt;&lt;EM&gt;Authors: Lei Sun, Neta Haiby, Cha Zhang, Sanjeev Jagtap&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Documents containing tables pose a major hurdle for information extraction. Tables are often found in financial documents, legal documents, insurance documents, oil and gas documents and more. Tables are often the most important part of a document, but extracting data from tables presents a unique set of challenges.&amp;nbsp;These include accurately detecting the tabular region within an image, and subsequently detecting and extracting information from the rows and columns of the detected table, as well as handling merged cells, complex tables, nested tables and more. Table extraction is the task of detecting the tables within the document and extracting them into a structured output that can be consumed by workflow applications such as robotic process automation (RPA) services, data analyst tools such as Excel, databases and search services.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Table-slides.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246195i55F9B8A11D006D71/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table-slides.gif" alt="Table-slides.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Customers often use manual processes for data extraction and digitization. However, with the new enhanced table extraction feature you can send a document (PDF or image) to Form Recognizer to extract all the information into structured, usable data at a fraction of the time and cost, so you can spend more time acting on the information rather than compiling it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246196i19F9FDF96FA549FE/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 1.png" alt="Table Blog 1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Table extraction challenges&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Table extraction from a wide variety of document images is a challenging problem due to the heterogeneous table structures, diverse table contents, and erratic use of ruling lines. To name a few concrete examples, in financial reports and technical publications, some borderless tables may have complex hierarchical header structures, contain many multi-line, empty or spanned cells, or have large blank spaces between neighboring columns. In forms, some tables may be embedded in other more complex tabular objects (e.g., nested tables) and some neighboring tables may be very close to each other which makes it hard to determine whether they should be merged or not. In invoices, tables may have different sizes, e.g., some key-value pairs composed tables may contain only two rows/columns and some line-item tables may span multiple pages. Sometimes, some objects in document images like figures, graphics, code listings, structurally laid out text, or flow charts may have similar textures as tables, which poses another significant challenge for successful detection of tables and reduction of false alarms. To make matters worse, many scanned or camera-captured document images are of poor image quality, and tables contained in them may be distorted (even curved) or contain artifacts or noises.&amp;nbsp; Existing table extraction solutions fall short of extracting tables from such document images with high accuracy, which has prevented workflow applications from effectively leveraging this technology.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246198i47D4AA2499B81261/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 2.png" alt="Table Blog 2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Form Recognizer Table extraction &lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;In recent years, the success of deep learning in various computer vision applications has motivated researchers to explore deep neural networks like convolutional neural networks (CNN) or graph neural networks (GNN) for detecting tables and recognizing table structures from document images. With these new technologies, the capability and performance of modern table extraction solutions have been improved significantly.&lt;/P&gt;
&lt;P&gt;In the latest release of Form Recognizer, we created a state-of-the-art table extraction solution with cutting-edge deep learning technology. After validating that Faster/Mask R-CNN based table detectors are effective in detecting a variety of tables (e.g., bordered or borderless tables, tables embedded in other more complex tabular objects, and distorted tables) in document images robustly, we further proposed a new method to improve the localization accuracy of such detectors, and achieved state-of-the-art results on the &lt;A href="https://github.com/cndplab-founder/ICDAR2019_cTDaR" target="_blank" rel="noopener"&gt;ICDAR-2019 cTDaR table detection benchmark dataset&lt;/A&gt; by only using a lightweight ResNet18 backbone network (Table 1).&lt;/P&gt;
&lt;P&gt;For the challenge of table recognition or table cell extraction, we leveraged existing CNN/GNN based approaches, which have proven to be robust to complex tables like borderless tables with complex hierarchical header structures and multi-line/empty/spanned cells. We further enhanced them to deal with distorted or even slightly curved tables in camera-captured document images, making the algorithm more widely applicable to different real-world scenarios. Figure 1 below shows a few examples to demonstrate such capabilities.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246199i494D4508F59AFFD0/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 3.png" alt="Table Blog 3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Easy and Simple to use&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Try it out with the &lt;A href="https://fott-preview.azurewebsites.net/layout-analyze" target="_blank" rel="noopener"&gt;Form Recognizer Sample Tool.&amp;nbsp;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 5.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246263i813A5841DE546599/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 5.png" alt="Table Blog 5.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Extracting tables from documents is as simple as 2 API calls, with no training, preprocessing, or anything else needed. Just call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync" target="_blank" rel="noopener"&gt;Analyze Layout&amp;nbsp;operation&lt;/A&gt; with your document (image, TIFF, or PDF file) as the input, and it extracts the text, tables, selection marks, and structure of the document.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;: &lt;STRONG&gt;The Analyze Layout Operation – &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;https://{endpoint}/formrecognizer/v2.1-preview.2/layout/analyze&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The Analyze Layout call returns a response header field called&amp;nbsp;Operation-Location. The&amp;nbsp;Operation-Location&amp;nbsp;value is a URL that contains the Result ID to be used in the next step.&lt;/P&gt;
&lt;P&gt;Operation location - &lt;BR /&gt;&lt;EM&gt;https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;: &lt;STRONG&gt;The Get Analyze Layout Result Operation –&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once you have the operation location call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeLayoutResult" target="_blank" rel="noopener"&gt;Get Analyze Layout Result&lt;/A&gt;&amp;nbsp;operation. This operation takes as input the Result ID that was created by the Analyze Layout operation.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;https://{endpoint}/formrecognizer/v2.1-preview.2/layout/analyzeResults/{resultId}&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The output of the Get Analyze Layout Result operation is a JSON document with the extracted tables – rows, columns, row span, column span, bounding boxes and more.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 4.jpg" style="width: 382px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246200i7B5CF06774E40D85/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 4.jpg" alt="Table Blog 4.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
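&lt;P&gt;As a rough illustration, the two calls could be scripted as follows; the endpoint, key and file path are placeholders, and the response shape follows the v2.1-preview.2 layout API described above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: call Analyze Layout, poll the result, and print each table's cells.
# Endpoint, key and file path are placeholders; adjust Content-Type for image files.
import time
import requests

endpoint = "&amp;lt;endpoint url&amp;gt;"
apim_key = "&amp;lt;subscription key&amp;gt;"
source = "&amp;lt;path to your document&amp;gt;"

analyze_url = endpoint + "/formrecognizer/v2.1-preview.2/layout/analyze"
headers = {"Content-Type": "application/pdf", "Ocp-Apim-Subscription-Key": apim_key}

with open(source, "rb") as f:
    resp = requests.post(analyze_url, data=f.read(), headers=headers)
result_url = resp.headers["Operation-Location"]

# Poll the Get Analyze Layout Result operation until the analysis finishes.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": apim_key}).json()
    if result.get("status") in ("succeeded", "failed"):
        break
    time.sleep(3)

if result.get("status") != "succeeded":
    raise SystemExit("Layout analysis failed: %s" % result)

# Tables are returned per page under analyzeResult.pageResults.
for page in result["analyzeResult"]["pageResults"]:
    for table in page.get("tables", []):
        print("Table with %d rows x %d columns" % (table["rows"], table["columns"]))
        for cell in table["cells"]:
            print("  row %d, col %d: %s" % (cell["rowIndex"], cell["columnIndex"], cell["text"]))
&lt;/LI-CODE&gt;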
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started &lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;To get started, create a Form Recognizer resource in the &lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt; and try out your tables in the &lt;A href="https://fott-preview.azurewebsites.net/layout-analyze" target="_blank" rel="noopener"&gt;Form Recognizer Sample Tool&lt;/A&gt;. You can also use the &lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=ga%2Cv2-0&amp;amp;pivots=programming-language-rest-api" target="_blank" rel="noopener"&gt;Form Recognizer client library or REST API.&lt;/A&gt;&lt;/SPAN&gt;&lt;BR /&gt;Note that table output is included in all parts of the Form Recognizer service – prebuilt, layout and custom – in the pageResults section of the JSON output.&lt;/LI&gt;
&lt;LI&gt;For additional questions please reach out to us at&amp;nbsp;&lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 14 Jan 2021 19:03:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011</guid>
      <dc:creator>NetaH</dc:creator>
      <dc:date>2021-01-14T19:03:46Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2022696#M140</link>
      <description>&lt;P&gt;how&amp;nbsp; to remove short answer after publishing qna maker KB?&lt;/P&gt;</description>
      <pubDate>Wed, 30 Dec 2020 15:02:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2022696#M140</guid>
      <dc:creator>manishagole96</dc:creator>
      <dc:date>2020-12-30T15:02:02Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2019590#M139</link>
      <description>&lt;P&gt;I agree with&amp;nbsp;&lt;LI-USER uid="881335"&gt;&lt;/LI-USER&gt;&amp;nbsp;this is really problematic to waste one index for test per KB. If I were to create 50 actual KB's I would have to take a higher tier Search just for Text Indexes. Please think about it. I am already running QnA in 5 different languages and many more to be added next year.&lt;/P&gt;&lt;P&gt;&lt;LI-USER uid="590027"&gt;&lt;/LI-USER&gt;&amp;nbsp;Is it possible to bring this in GA sooner in Q1. It would save a huge lot of deal with managing several resources. Additionally West Europe region is missing in Preview, can it be added?&lt;/P&gt;&lt;P&gt;Just so you know I have adopted QnA Maker since it was in Preview in early 2018.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2020 07:24:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/2019590#M139</guid>
      <dc:creator>amitc2021</dc:creator>
      <dc:date>2020-12-29T07:24:07Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2012004#M138</link>
      <description>&lt;P&gt;Turkish Ahmet sounds like an evening news anchor in NTV news channel of Turkey, lol :)&lt;/img&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 23 Dec 2020 10:51:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2012004#M138</guid>
      <dc:creator>ozanyasindogan</dc:creator>
      <dc:date>2020-12-23T10:51:19Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2004797#M137</link>
      <description>&lt;P&gt;So great to see the huge list and choices, I clicked each one to get a feel of it!! One observation barring few dialects e.g., English(India, Ireland, Canada), Hindi, Telugu, Solvenian most of the other felt as if it's being spoken very fast as if a fight, or debate; is there a way to listen the same tad slower?&lt;/P&gt;</description>
      <pubDate>Sun, 20 Dec 2020 08:43:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2004797#M137</guid>
      <dc:creator>Balaji Mishra</dc:creator>
      <dc:date>2020-12-20T08:43:18Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2004325#M136</link>
      <description>&lt;P&gt;Are you planning to include Uyghur language (ug-CN) soon? It is one of major languages in China and central Asian republics.&lt;/P&gt;</description>
      <pubDate>Sat, 19 Dec 2020 19:17:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2004325#M136</guid>
      <dc:creator>Abduxukur Abdurixit</dc:creator>
      <dc:date>2020-12-19T19:17:48Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2001406#M135</link>
      <description>&lt;P&gt;does this mean Teams can transcribe meetings live in these languages? if not, is that expected soon?&lt;/P&gt;</description>
      <pubDate>Fri, 18 Dec 2020 14:56:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/2001406#M135</guid>
      <dc:creator>urido</dc:creator>
      <dc:date>2020-12-18T14:56:40Z</dc:date>
    </item>
    <item>
      <title>Bot Framework Composer 1.3 is now available!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/bot-framework-composer-1-3-is-now-available/ba-p/1996923</link>
      <description>&lt;P&gt;This week, as the year draws to a close, we are excited to announce that Bot Framework Composer 1.3 is now available to download. Composer has come a long way since we made the product GA (generally available) at the Microsoft Build conference earlier this year and this is our biggest release yet, adding many significant capabilities and making building sophisticated bots and virtual assistants even easier!&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;New features to improve the developer experience and workflow&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;If you are a developer working with Bot Framework Skills today, you will know that developing multiple bots locally that work together can sometimes be a challenge, especially when it comes to setting up debugging. In Composer 1.3, we have now added a multi-bot authoring and management experience to transform this scenario, adding the capability to create, manage and test multiple bots within a single project. With a single click, you can now start all local bots for debugging, enabling you to test your root (parent) bot, connected to one or more skills with no additional manual configuration needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another significant enhancement is for the provisioning feature, which previously required developers to leave Composer and run a PowerShell script, copying back a resulting configuration into Composer. Now, the provisioning process has been overhauled and users can log in to Azure, provision required resources and subsequently publish bots, all within the Composer environment!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="provisioning.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241314iB969EFD59180320E/image-size/large?v=v2&amp;amp;px=999" role="button" title="provisioning.PNG" alt="provisioning.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Additionally, we have implemented a new settings experience, providing an improved interface, removing the need to manually edit the underlying JSON for common settings, whilst retaining the ability to make changes or add additional configuration manually if you need to.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Localization&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to the existing capability for developers to localize their bots, multilingual support has now been added to the Composer UI! You can now choose from a long list of available languages within the Application Settings pane to change the language displayed within Composer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="languages.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241315i7E753590FE83785A/image-size/large?v=v2&amp;amp;px=999" role="button" title="languages.png" alt="languages.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Preview features&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of version 1.3, you can now choose to enable one or more preview features by choosing preview feature flags within the Composer settings page. These features are designed to give you early access and a chance to try what we are working on right now for future Composer releases. The following preview feature flags are now available.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;U&gt;Form Dialogs&lt;/U&gt; – Automatically generate a sophisticated dialog by simply providing the properties that you would like customers to provide as part of the conversation, with Composer then generating the appropriate dialog, language understanding (to enable dis-ambiguation and interruption scenarios) and bot responses (.lg files) assets.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Orchestrator&lt;/U&gt; – A new top-level recognizer which can help to arbitrate (dispatch) between multiple LUIS and QnA Maker models to ensure accurate routing of user requests to the appropriate language model or skill.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Package Manager&lt;/U&gt; – Developers can now discover and install packages from NuGet / NPM that contain re-usable assets, including dialogs, custom actions and .LG (language generation) files, that can be utilized by their bots. Once installed, assets contained within a package become available for use within a bot. Moving forward, we will provide guidance for how you can create and publish your own packages (including to internal feeds if desired), as well as making available a number of packages covering common scenarios that will ship with Composer.&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="package-manager.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241316i73318D9B10036745/image-size/large?v=v2&amp;amp;px=999" role="button" title="package-manager.PNG" alt="package-manager.PNG" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Conversational core template&lt;/U&gt; – Built on the new package capabilities, surfaced via the preview of the Package Manager, we are developing a new component model for bot development using re-usable building blocks (packages). With this preview, users can create a bot using the new conversational core template which consists of a configurable runtime that can be extended with packages or importing additional skills.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Help us to improve Composer!&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;Within this release we have enabled the ability for users of Composer to opt in to sending usage information to us, to allow us to better understand how Composer is used. As we gather this telemetry, we can use it as an additional signal to help us prioritize our efforts in future releases and ensure we are focusing on the right features. You can help us by opting into providing usage data via the Data Collection section of the Composer settings page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, a huge thank you to all of our users for your support and feedback during 2020 - we are excited to bring more significant updates to you as we move into 2021. Happy Holidays to everyone from the entire Conversational AI team!&lt;/P&gt;</description>
      <pubDate>Thu, 17 Dec 2020 10:42:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/bot-framework-composer-1-3-is-now-available/ba-p/1996923</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-12-17T10:42:45Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1996585#M133</link>
      <description>&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Thu, 17 Dec 2020 08:21:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1996585#M133</guid>
      <dc:creator>HotCakeX</dc:creator>
      <dc:date>2020-12-17T08:21:42Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1996145#M132</link>
      <description>&lt;P&gt;&lt;LI-USER uid="310193"&gt;&lt;/LI-USER&gt;&amp;nbsp;thank you for your feedback! Persian is also considered in our backlog, please stay tune!&lt;/P&gt;</description>
      <pubDate>Thu, 17 Dec 2020 02:26:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1996145#M132</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2020-12-17T02:26:28Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1995085#M131</link>
      <description>&lt;P&gt;Hi, I noticed there is no support for Persian language, it's spoken by roughly +100 millions people in the world, I see in the list languages less common are supported.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Dec 2020 18:59:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1995085#M131</guid>
      <dc:creator>HotCakeX</dc:creator>
      <dc:date>2020-12-16T18:59:22Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1993644#M130</link>
      <description>&lt;P&gt;Question for the team. I did migrate some bots in production to the preview. Works as a charm and takes away a lot of resources from the list.&lt;/P&gt;&lt;P&gt;I did notice that the preview adds a test index to each database. So for each knowledge base you add, two indexes are used used in de search service (in stead of using one testbk index in the old situation)&amp;nbsp; Is this by design? And if so, what is the rationel behind it. This does have a very negative effect on operational costs.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Dec 2020 14:47:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1993644#M130</guid>
      <dc:creator>HesselW</dc:creator>
      <dc:date>2020-12-16T14:47:43Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1991957#M129</link>
      <description>&lt;P&gt;&lt;LI-USER uid="878040"&gt;&lt;/LI-USER&gt;&amp;nbsp;thank you for your feedback! Belgium Dutch (nl-BE) is considered in future release, please stay tune!&lt;/P&gt;</description>
      <pubDate>Wed, 16 Dec 2020 02:06:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1991957#M129</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2020-12-16T02:06:08Z</dc:date>
    </item>
    <item>
      <title>Re: Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1989430#M128</link>
      <description>&lt;P&gt;The Dutch voices are typical voices for the Netherlands. In Flanders / Belgium we have the same language but very different pronunciation. A 'nl-BE' voice would be very welcome.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Dec 2020 16:34:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/bc-p/1989430#M128</guid>
      <dc:creator>backnext</dc:creator>
      <dc:date>2020-12-15T16:34:36Z</dc:date>
    </item>
    <item>
      <title>Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/ba-p/1988418</link>
      <description>&lt;P&gt;&lt;EM&gt;This post was co-authored with Qinying Liao, Sheng Zhao, Gang Wang, Yueying Liu&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&amp;nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;close to human-parity&lt;/A&gt;. &amp;nbsp;Since its launch, we have seen it widely adopted in a variety of scenarios by many Azure customers, from voice assistants to audio content creation. More and more customers are asking for richer and more diverse choices of synthetic voices for different use cases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Azure Neural TTS has added 51 new voices for a total of 129 neural voices across 54 languages/locales. With this release, we provide at least one male and one female voice for customers to choose in each language/locale. &amp;nbsp;In total, Azure TTS now enables developers to reach millions more people with more than 200 voices available in standard and neural TTS.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;What's new&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS has now been extended to support 51 new &lt;EM&gt;voices&lt;/EM&gt;, bringing you both male and female voices in each language for your apps. You can hear samples of the voices below, or try them with your own text in &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features?ocid=AID3027325" target="_blank" rel="noopener"&gt;our demo&lt;/A&gt;.&lt;/P&gt;
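&lt;P&gt;As a quick illustration, here is a minimal sketch of selecting one of these neural voices by name with the Speech SDK for Python; the subscription key, region and voice name below are placeholders:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: synthesize speech with a specific neural voice.
# Subscription key, region and voice name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="&amp;lt;subscription key&amp;gt;", region="&amp;lt;region&amp;gt;")
speech_config.speech_synthesis_voice_name = "en-CA-LiamNeural"  # any voice from the table below

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("He had held the position since 2010.").get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success
&lt;/LI-CODE&gt;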
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;46 new voices are generally available&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In total 46 new voices are released across the 49 locales that are generally available in the Azure data centers/regions that support neural TTS (see the full list of &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;Azure regions&lt;/A&gt; here).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 80%;" width="80%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="180" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="690" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="880" class="lia-align-center" style="width: 250px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample audio&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ar-EG&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Arabic (Egypt)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ShakirNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;البركان هو أكثر ما في الطبيعــة إثارة للرهبة&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ar-EG%20Shakir.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ar-SA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Arabic (Saudi Arabia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HamedNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;الناس مَعادن، تصدأ بالملل، وتتمدد بالأمل، وتنكمش بالألم&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ar-SA%20Hamed.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;bg-BG&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Bulgarian (Bulgaria)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;BorislavNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Шофьорът задължително трябва да вземе експерт за второ мнение, за да провери дали всички системи на автомобила работят нормално.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/bg-BG%20Borislav.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ca-ES&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Catalan (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;EnricNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Les activitats docents tenen lloc al campus del Poblenou.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ca-ES%20Enric.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ca-ES&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Catalan (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JoanaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;L'artista està considerat com el pintor de les multituds.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ca-ES%20Joana.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;cs-CZ&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Czech (Czech)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AntoninNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Opravdový zasvěcenec ví, že nejmocnějším tajemstvím je to, které nemá žádný obsah.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/cs-CZ%20Antonin.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;da-DK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Danish (Denmark)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JeppeNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;61 procent af de kandidatstuderende er kvinder.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/da-DK%20Jeppe.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;de-AT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;German (Austria)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Das ist das letzte lange Pfingstwochenende für Schülerinnen und Schüler.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-AT%20Jonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;de-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;German (Switzerland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Eine Person, die sich bei Brandausbruch im oberen Stock aufgehalten hat, hat sich noch rechtzeitig in Sicherheit bringen können.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-CH%20Jan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;el-GR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Greek (Greece)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NestorasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Συγκλονιστικές εξελίξεις και ανατροπές στα επόμενα επεισόδια .&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/el-GR%20Nestoras.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-CA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (Canada)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;LiamNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;He had held the position since 2010.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/en-CA%20Liam.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ConnorNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Life is short, think before you talk.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/en-IE%20Connor.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;PrabhatNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Sometimes you can see snow on the mountains.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/en-IN%20Prabhat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fi-FI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Finnish (Finland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HarriNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Yhtiö kertoi loppuvuoden tuloksestaan ennakkotietoja.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fi-FI%20Harri.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fi-FI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Finnish (Finland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SelmaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Hevoset ovat uljaita ja nopeita eläimiä.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fi-FI%20Selma.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fr-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;French (Switzerland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FabriceNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;La Suisse comptera 5,6 millions (12%) de personnes actives en 2050.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fr-CH%20Fabrice.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;he-IL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hebrew (Israel)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AvriNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;הוא אמר שהמספרים מדאיגים בשל עצמם, אבל בכל הישיבות שלנו המסקנה היא שזה סימפטום למשהו רחב יותר.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/he-IL%20AvriNeural.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hi-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hindi (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MadhurNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;सिद्धार्थ ने भी शहनाज के साथ इस इवेंट की फोटो शेयर की है।&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/hi-IN%20Madhur.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hr-HR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Croatian (Croatia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SreckoNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Video je pregledan gotovo 70 tisuća puta, a neki od obožavatelja su mu u komentarima pisali kako ih je motivirao.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/hr-HR%20Srecko.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hu-HU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hungarian (Hungary)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;TamasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;A lakóhelyem nagyon komfortos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/hu-HU%20Tamas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;id-ID&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Indonesian (Indonesia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;GadisNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Inflasi ringan terjadi apabila kenaikan harga berada di bawah angka 10% setahun.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/id-ID%20Gadis.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ms-MY&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Malay (Malaysia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;OsmanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Setiap individu perlu memakai topeng muka ketika berada di luar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ms-MY%20Osman.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nb-NO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Norwegian (Bokmål, Norway)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FinnNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Jansson forteller at den svenske øya tar imot rundt 8000 besøkende fra Norge årlig.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nb-NO%20Finn.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nb-NO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Norwegian (Bokmål, Norway)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;PernilleNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;For en fantastisk forestilling!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nb-NO%20Pernille.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nl-NL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Dutch (Netherlands)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FennaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;De&amp;nbsp;afstand tussen Rotterdam en Breda&amp;nbsp;is ongeveer 45 km.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nl-NL%20Fenna.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nl-NL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Dutch (Netherlands)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MaartenNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zij heeft haar studie al een tijdje geleden afgerond.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nl-NL%20Maarten.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pl-PL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Polish (Poland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AgnieszkaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;To już nie będzie to samo, będzie drożej.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pl-PL%20Agnieszka.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pl-PL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Polish (Poland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MarekNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Na wszelki wypadek sprawdź, czy coś cię jednak nie zaskoczy.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pl-PL%20Marek.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pt-PT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Portuguese (Portugal)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;DuarteNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Para a aprovação do exame, tenho de ter pelo menos 80% das respostas corretas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pt-PT%20Duarte.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pt-PT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Portuguese (Portugal)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;RaquelNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;A minha mãe ensinou-me que devo ter respeito por todos, mas principalmente pelos mais velhos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pt-PT%20Raquel.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ro-RO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Romanian (Romania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;EmilNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Actul normativ se axează pe instituirea de măsuri active, 41,5 % din salariul de bază la revenirea din șomaj tehnic.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ro-RO%20Emil.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ru-RU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;DmitryNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Ранее посольство требовало от агентства опровержения статьи о количестве больничных коек в России.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ru-RU%20Dmitry.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ru-RU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SvetlanaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Изменений в организме людей, попробовавших еду без приправ, не произошло.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ru-RU%20Svetlana.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sk-SK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Slovak (Slovakia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;LukasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zápis 45 % je v skutočnosti iba skratka pre zlomok.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sk-SK%20Lukas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sl-SI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Slovenian (Slovenia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;RokNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zloraba bonov in dvigovanje cen turističnih storitev je nesprejemljivo ravnanje.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sl-SI%20Rok.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sv-SE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Swedish (Sweden)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MattiasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Båda lagen bjöd på riktigt bra hockey och skapade flera riktigt bra målchanser.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sv-SE%20Mattias.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sv-SE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Swedish (Sweden)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SofieNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Det fanns ingen trafik runt torget.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sv-SE%20Sofie.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ta-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Tamil (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ValluvarNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;எவ்வளவு அருமையான பாடல் அது!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/ta-IN%20Valluvar.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;te-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Telugu (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MohanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;అబ్బ, ఎంత పెద్ద భవనమో!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/te-IN%20Mohan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;th-TH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Thai (Thailand)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NiwatNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;ธุรกิจขายอาหารเป็นธุรกิจที่ได้รับความนิยมมากที่สุด&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/th-TH%20Niwat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;tr-TR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Turkish (Turkey)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AhmetNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Sosyal mesafeye büyük ölçüde riayet eden çocuklar, başta mahalle parkları olmak üzere sahiller ve oyun parklarında enerji attı.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/tr-TR%20Ahmet.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;vi-VN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Vietnamese (Vietnam)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NamMinhNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Nhiệt độ hiện tại ở thành phố Hồ Chí Minh là 38 độ C.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/vi-VN%20NamMinh.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-HK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Cantonese, Traditional)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HiuMaanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;抗疫舉措成為安全重啟經濟的重要一環。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-HK%20HiuMaan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-HK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Cantonese, Traditional)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;WanLungNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;隨着疫情緩和，愈來愈多人回到辦公室上班，但是很多人仍想留在家中工作（work from home）。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-HK%20WanLung.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-TW&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Taiwanese Mandarin)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HsiaoChenNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;賭博的勝率應該不到50%。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-TW%20HsiaoChen.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-TW&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Taiwanese Mandarin)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;YunJheNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;台北車站大廳能不能坐，連日引發正反意見。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-TW%20YunJhe.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;5 new voices are in public preview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have also added 5 male voices in the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;5 low-resource languages &lt;/A&gt;that have been supported since November. These voices are available in public preview in&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;three Azure regions&lt;/A&gt;: EastUS, SouthEastAsia and WestEurope.&lt;/P&gt;
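&lt;P&gt;If you want to check programmatically which voices your Speech resource’s region returns (for example, to confirm these preview voices appear in one of the three regions above), you can query the voice list REST endpoint. The sketch below is illustrative only and is not part of the original announcement; the key, region, and JSON field names are assumptions to verify against the Speech service documentation.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (assumptions: key, region, and response field names) of listing the
# voices exposed by a Speech resource region and filtering for the preview voices.
import requests

SPEECH_KEY = "your-speech-resource-key"   # placeholder
REGION = "westeurope"                     # one of the preview regions: eastus, southeastasia, westeurope

url = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/voices/list"
resp = requests.get(url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
resp.raise_for_status()

preview_voices = {"et-EE-KertNeural", "ga-IE-ColmNeural", "lt-LT-LeonasNeural",
                  "lv-LV-NilsNeural", "mt-MT-JosephNeural"}
for voice in resp.json():
    if voice["ShortName"] in preview_voices:
        print(voice["ShortName"], voice["Locale"], voice["Gender"])
&lt;/PRE&gt;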
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the samples below:&lt;/P&gt;
&lt;TABLE style="width: auto;" width="auto"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="57" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216" class="lia-align-center" style="width: 250px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample audio&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;KertNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Ametlikku meetodit sellise pettuse avastamiseks ei olegi olemas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/et-EE%20Kert.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;ColmNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Ritheadh próiseas comhairliúcháin faoin scéal sa bhfómhar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/ga-IE%20Colm.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;LeonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Aišku, anksčiau ar vėliau paaiškės tos priežastys.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lt-LT%20Leonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;NilsNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Aizvadīto gadu uzņēmums noslēdzis ar 6,3 miljonu eiro zaudējumiem.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lv-LV%20Nils.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;JosephNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Anki tfajjel tal-primarja jaf li l-popolazzjoni tikber fejn hemm il-prosperità.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/mt-MT%20Joseph.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this release, we now support a total of 129 neural voices across 54 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech" target="_blank" rel="noopener"&gt;Language support - Speech service - Azure Cognitive Services | Microsoft Docs&lt;/A&gt; for the full language and voice list.&lt;/P&gt;
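&lt;P&gt;To try one of the generally available neural voices from your own code, the Speech SDK for Python can be used as in the minimal sketch below. This sketch is not part of the original post; the subscription key, region, and the "locale-VoiceName" naming convention for the voice are assumptions to verify against the language support page linked above.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (key/region are placeholders): synthesize speech with one of the
# newly released neural voices using the Speech SDK for Python.
# Install with: pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="your-speech-resource-key",
                                       region="westeurope")
# Voice name assumed to follow the "locale-VoiceName" pattern, e.g. ca-ES-JoanaNeural.
speech_config.speech_synthesis_voice_name = "ca-ES-JoanaNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("L'artista està considerat com el pintor de les multituds.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio")
else:
    print("Synthesis did not complete:", result.reason)
&lt;/PRE&gt;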
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="map for blog (2).png" style="width: 899px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/240968iE8FCBB4F388639C2/image-size/large?v=v2&amp;amp;px=999" role="button" title="map for blog (2).png" alt="map for blog (2).png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Continuous voice quality improvement&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In general, Neural TTS can convert text to lifelike speech; however, there are always nuances that can be improved. For example, we have customers who have requested support for a scenario where Katja, our de-DE neural voice, pronounces English words in the context of a German sentence. This was valuable feedback, and we anticipate a similar need across languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For German, we observed that our users prefer the voice to pronounce an English word or phrase as close to the native English pronunciation as possible. To enable a voice model to speak English as a second language, we would normally need to collect speech data of the same speaker speaking English in addition to their native language. This is a big challenge, as we do not have sufficient multi-language speech data from our German voice talents. By leveraging the cross-lingual capability of &lt;SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;UNI-TTS&lt;/A&gt;&lt;/SPAN&gt;, we are able to generate more English pronunciation data with the voice transferred from our German voice talent. This data is used to improve the quality of English word and phrase pronunciations for the German Katja voice, so Katja can pronounce English words in a more natural way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;CMOS&lt;/A&gt; metric is used to measure the improvement in English word pronunciation for Katja. The table below shows that the updated model is significantly better at pronouncing English words in the context of a German sentence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: auto;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="160px" class="lia-align-center"&gt;&lt;STRONG&gt;Script&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;&lt;STRONG&gt;Old&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160px" style="width: 200px;"&gt;Star Wars - Das Erwachen der Macht&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00026-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00026-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160px"&gt;&lt;SPAN&gt;Three&lt;/SPAN&gt; Billboards outside Ebbing, Missouri.&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00037-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00037-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;This improvement has now been released to the Azure Neural TTS service for Katja. Moving forward, we’ll extend this capability to support more languages.&lt;/SPAN&gt;&lt;/P&gt;
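&lt;P&gt;If you want to hear the updated Katja voice on a mixed German/English script such as the movie titles in the table above, a minimal sketch like the one below can be used to write the audio to a file. This is illustrative only; the key, region, and file name are placeholders, and it assumes the improvement applies to regular text requests without any extra markup.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (placeholders noted above): synthesize a German sentence that contains
# an English title with the de-DE Katja voice, saving the result to a wav file.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="your-speech-resource-key",
                                       region="westeurope")
speech_config.speech_synthesis_voice_name = "de-DE-KatjaNeural"

# Write the synthesized audio to a local wav file instead of the default speaker.
audio_config = speechsdk.audio.AudioOutputConfig(filename="katja-mixed-language.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

result = synthesizer.speak_text_async("Star Wars - Das Erwachen der Macht").get()
print(result.reason)
&lt;/PRE&gt;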
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Tell us your experience!&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you are building a voice-enabled chatbot or IoT device, building an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural-sounding and fun with Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know how you are using or plan to use Neural TTS voices in this &lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener"&gt;form&lt;/A&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing your experience and developing more compelling services together with you for the developers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Add voice to your app in 15 minutes&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Explore the available voices in this demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk#optional-change-the-language-and-bot-voice" target="_blank" rel="noopener"&gt;Build a voice-enabled bot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;Deploy Azure TTS voices on prem with Speech Containers&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Build your custom voice&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Dec 2020 06:46:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/ba-p/1988418</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2020-12-16T06:46:34Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1975859#M124</link>
      <description>&lt;P&gt;Whom can I reach out to, for questions and bugs?&lt;/P&gt;</description>
      <pubDate>Thu, 10 Dec 2020 12:32:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1975859#M124</guid>
      <dc:creator>aowens-jmt</dc:creator>
      <dc:date>2020-12-10T12:32:34Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1961557#M123</link>
      <description>&lt;P&gt;&lt;LI-USER uid="881335"&gt;&lt;/LI-USER&gt;&amp;nbsp;Glad that you liked our new offering. We are still working on the pricing and yes it will mostly be aligned to the current pricing model or even slightly simpler.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-USER uid="345890"&gt;&lt;/LI-USER&gt;&amp;nbsp;Thanks a lot. May be the issue is with the kind of structure/formatting those PDF files contains as our extraction currently works on the formatting structure of the semi-structured files. Could you please drop us a mail at &lt;A href="mailto:qnamakerteam@microsoft.com" target="_blank"&gt;qnamakerteam@microsoft.com&lt;/A&gt;&amp;nbsp;so that we can debug this, we will be happy to help.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 07 Dec 2020 08:02:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1961557#M123</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2020-12-07T08:02:38Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1954528#M122</link>
      <description>&lt;P&gt;Looks great!&amp;nbsp; We are trying to propose QnA maker in a customer's environment to help index a lot (hundreds) of documents.&amp;nbsp; However, in our testing, QnA maker is having problems reading (ingesting) many of the pdf documents.&amp;nbsp; Is this a common issue?&amp;nbsp; Should we use Azure Search instead?&lt;/P&gt;</description>
      <pubDate>Thu, 03 Dec 2020 22:26:50 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1954528#M122</guid>
      <dc:creator>PeytonMcM</dc:creator>
      <dc:date>2020-12-03T22:26:50Z</dc:date>
    </item>
    <item>
      <title>Meta-data driven key-value pairs extraction with Azure Form Recognizer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/meta-data-driven-key-value-pairs-extraction-with-azure-form/ba-p/1942595</link>
      <description>&lt;P&gt;Most organizations are now aware of how valuable the forms (pdf, images, videos…) they keep in their closets are. They are looking for best practices and most cost-effective ways and tools to digitize those assets. &amp;nbsp;By extracting the data from those forms and combining it with existing operational systems and data warehouses, they can build powerful AI and ML models to get insights from it to deliver value to their customers and business users.&lt;/P&gt;
&lt;P&gt;With the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview" target="_blank" rel="noopener"&gt;Form Recognizer Cognitive Service&lt;/A&gt;, we help organizations to harness their data, automate processes (invoice payments, tax processing …), save money and time and get better accuracy.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 1-Typical form.png" style="width: 921px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236730i90EB2354726E1A31/image-size/large?v=v2&amp;amp;px=999" role="button" title="Figure 1-Typical form.png" alt="Figure 1-Typical form.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 1:Typical form&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my first blog about automated form processing, I described how you can extract key-value pairs from your forms in real time using the Azure Form Recognizer cognitive service. We have successfully implemented that solution for many customers.&lt;/P&gt;
&lt;P&gt;Often, after a successful PoC or MVP, our customers realize that they not only need this real-time solution but also have a huge backlog of forms they would like to ingest into their relational or NoSQL databases or data lake in a batch fashion. They have different types of forms, and they don’t want to build a model for each type. They are also looking for an easy and quick way to ingest new types of forms.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll describe how to dynamically train a Form Recognizer model to extract the key-value pairs from different types of forms, at scale, using Azure services. We’ll also share a GitHub repository where you can download the code and implement the solution we describe in this post.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The backlog of forms may be in your on-premises environment or on an (s)FTP server. We assume that you were able to upload them into an Azure Data Lake Store Gen 2 account using &lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal" target="_blank" rel="noopener"&gt;Azure Data Factory&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows" target="_blank" rel="noopener"&gt;Storage Explorer&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs" target="_blank" rel="noopener"&gt;AzCopy&lt;/A&gt;. Therefore, the solution we’ll describe here focuses on the data ingestion from the data lake to the (No)SQL database.&lt;/P&gt;
&lt;P&gt;Our product team published a great tutorial on how to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;Train a Form Recognizer model and extract form data by using the REST API with Python&lt;/A&gt;. The solution described in that tutorial demonstrates the approach for one model and one type of form and is ideal for real-time form processing.&lt;/P&gt;
&lt;P&gt;The value-add of this post is to show how to automatically train a model with new and different types of forms using a meta-data driven approach, in batch mode.&lt;/P&gt;
&lt;P&gt;Below is the high-level architecture.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 2 - High Level Architecture.png" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236732i2CE7C4C2187D161A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Figure 2 - High Level Architecture.png" alt="Figure 2 - High Level Architecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 2:&amp;nbsp; High Level Architecture&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Azure services required to implement this solution&lt;/H2&gt;
&lt;P&gt;To implement this solution, you will need to create the below services:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Form Recognizer resource:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;Form Recognizer resource&amp;nbsp;to setup and configure the form recognizer cognitive service, get the API key and endpoint URI.&lt;/P&gt;
&lt;H3&gt;Azure SQL single database:&lt;/H3&gt;
&lt;P&gt;We will create a meta-data table in Azure SQL Database. This table will contain the non-sensitive data required by the Form Recognizer REST API. The idea is that whenever there is a new type of form, we just insert a new record in this table and trigger the training and scoring pipeline.&lt;BR /&gt;The required attributes of this table are:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;form_description: This field is not required for training the model or for inference. It is just there to provide a description of the type of forms we are training the model for (for example, client A forms, hotel B forms, ...).&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;training_container_name: This is the storage account container name where we store the training dataset. It can be the same as scoring_container_name&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;training_blob_root_folder: The folder in the storage account where we’ll store the files for the training of the model.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;scoring_container_name: This is the storage account container name where we store the files we want to extract the key value pairs from.&amp;nbsp; It can be the same as the training_container_name&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;scoring_input_blob_folder: The folder in the storage account where we’ll store the files to extract key-value pair from.&lt;/LI&gt;
&lt;LI&gt;model_id: The identifier of the model we want to retrain. For the first run, the value must be set to -1 to create and train a new custom model. The training notebook will return the newly created model id to the data factory and, using a stored procedure activity, we’ll update the meta-data table in the Azure SQL database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Whenever you have a new form type, you need to reset the model id to -1 and retrain the model.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;file_type: The supported types are&amp;nbsp;application/pdf,&amp;nbsp;image/jpeg,&amp;nbsp;image/png,&amp;nbsp;image/tif.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;form_batch_group_id: Over time, you might have multiple form types that you train against different models. The form_batch_group_id allows you to specify all the form types that have been trained using a specific model.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Azure Key Vault:&lt;/H3&gt;
&lt;P&gt;For security reasons, we don’t want to store certain sensitive information in the parametrization table in the Azure SQL database. We store those parameters in Azure Key Vault secrets.&lt;/P&gt;
&lt;P&gt;Below are the parameters we store in the key vault:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CognitiveServiceEndpoint: The endpoint of the form recognizer cognitive service. This value will be stored in Azure Key Vault for security reasons.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;CognitiveServiceSubscriptionKey: The access key of the cognitive service. This value will be stored in Azure Key Vault for security reasons. The screenshot below shows how to get the key and endpoint of the cognitive service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Figure 3 - Cognitive Service Keys and Endpoint.png" style="width: 444px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236735iB3D7BA397AC96780/image-dimensions/444x210?v=v2" width="444" height="210" role="button" title="Figure 3 - Cognitive Service Keys and Endpoint.png" alt="Figure 3 - Cognitive Service Keys and Endpoint.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 3: Cognitive Service Keys and Endpoint&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;StorageAccountName: The storage account where the training dataset and forms we want to extract the key value pairs from are stored. The two storage accounts can be different. The training dataset must be in the same container for all form types. They can be in different folders.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;StorageAccountSasKey: The shared access signature (SAS) of the storage account.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The screenshot below shows the key vault after you create all the secrets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Figure 4 - Key Vault Secrets.png" style="width: 543px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236738i47D666152D7294EC/image-dimensions/543x242?v=v2" width="543" height="242" role="button" title="Figure 4 - Key Vault Secrets.png" alt="Figure 4 - Key Vault Secrets.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 4 : Key Vault Secrets&lt;/P&gt;
&lt;H3&gt;Azure Data Factory:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;To orchestrate the training and scoring of the model. Using a lookup activity, we’ll retrieve the parameters from the Azure SQL Database and then run the training and scoring Databricks notebooks. All the sensitive parameters stored in Key Vault will be retrieved inside the notebooks, as shown in the sketch below.&lt;/P&gt;
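&lt;P&gt;The snippet below is an illustrative sketch (not taken from the original post) of how a Databricks notebook could read those Key Vault-backed secrets; it assumes a Databricks secret scope has been created against the key vault, and the scope name "formrecognizer-scope" is a placeholder.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: read the Key Vault-backed secrets from a Databricks notebook.
# dbutils is available inside Databricks notebooks; the scope name is a placeholder.
endpoint = dbutils.secrets.get(scope="formrecognizer-scope", key="CognitiveServiceEndpoint")
api_key  = dbutils.secrets.get(scope="formrecognizer-scope", key="CognitiveServiceSubscriptionKey")
account  = dbutils.secrets.get(scope="formrecognizer-scope", key="StorageAccountName")
sas_key  = dbutils.secrets.get(scope="formrecognizer-scope", key="StorageAccountSasKey")
&lt;/PRE&gt;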
&lt;H3&gt;Azure Data Lake Gen 2:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;To store the training dataset and the forms we want to extract the key-value pairs from. The training and the scoring datasets can be in different containers but, as mentioned above, the training dataset must be in the same container for all form types.&lt;/P&gt;
&lt;H3&gt;Azure Databricks:&lt;/H3&gt;
&lt;P&gt;To implement the Python scripts that train and score the model. Note that we could have used Azure Functions instead.&lt;/P&gt;
&lt;H3&gt;Azure Key Vault:&lt;/H3&gt;
&lt;P&gt;To store the sensitive parameters required by the Form Recognizer REST API.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The code to implement this solution is available in the following &lt;A href="https://github.com/issaghaba/Meta-data-driven-key-value-pairs-extraction-with-Azure-Form-Recognizer" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt;.&lt;/P&gt;
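&lt;P&gt;To make the flow above more concrete, the sketch below shows the kind of REST call the training notebook makes to train (or retrain) a custom model from a training folder in the data lake. It is illustrative only and is not taken from the repository; the endpoint, key, API version, SAS URL, and folder name are placeholders, and the request/response shape should be checked against the quickstart linked under Additional Resources below.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (placeholders/assumptions noted above): train a custom Form Recognizer
# model from a training folder, poll until training completes, then report the model id
# that would be written back to the meta-data table.
import time
import requests

ENDPOINT = "https://your-form-recognizer.cognitiveservices.azure.com"   # placeholder
API_KEY  = "your-form-recognizer-key"                                   # placeholder
headers  = {"Ocp-Apim-Subscription-Key": API_KEY}

body = {
    "source": "https://yourstorage.blob.core.windows.net/training-container?sv=...sas-token...",
    "sourceFilter": {"prefix": "training_blob_root_folder/", "includeSubFolders": False},
}
resp = requests.post(f"{ENDPOINT}/formrecognizer/v2.0/custom/models", headers=headers, json=body)
resp.raise_for_status()
model_location = resp.headers["Location"]   # URL to poll for the new model's status

while True:
    status = requests.get(model_location, headers=headers).json()
    if status["modelInfo"]["status"] != "creating":
        break
    time.sleep(5)

print("model id:", status["modelInfo"]["modelId"], "status:", status["modelInfo"]["status"])
&lt;/PRE&gt;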
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Get started with deploying Form Recognizer –&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Custom Model&lt;/STRONG&gt;&amp;nbsp;– extract text, tables and key value pairs&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model and extract form data by using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model with labels using the sample labeling tool&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Form Recognizer Sample Labeling Tool&amp;nbsp;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Try it out:&amp;nbsp;&lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;https://fott.azurewebsites.net/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Open Source project:&amp;nbsp;&lt;A href="https://github.com/microsoft/OCR-Form-Tools" target="_blank" rel="noopener"&gt;https://github.com/microsoft/OCR-Form-Tools&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Prebuilt receipts -&amp;nbsp;&lt;/STRONG&gt;extract data from USA sales receipts&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts" target="_blank" rel="noopener"&gt;Quickstart: Extract receipt data using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Layout -&amp;nbsp;&lt;/STRONG&gt;extract text and table structure (row and column numbers) from your documents&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-layout" target="_blank" rel="noopener"&gt;Quickstart: Extract layout data using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;See&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/whats-new" target="_blank" rel="noopener"&gt;What’s New&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Mon, 30 Nov 2020 23:07:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/meta-data-driven-key-value-pairs-extraction-with-azure-form/ba-p/1942595</guid>
      <dc:creator>IssaghaBa</dc:creator>
      <dc:date>2020-11-30T23:07:44Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1930628#M119</link>
      <description>&lt;P&gt;You totally made my day. Just started a chatbot for a customer in need of three languages in one bot. This will help a lot. I already liked the fact that I don't have to manually change key's to link the qna service to another search service :flexed_biceps:&lt;/img&gt;&lt;/P&gt;&lt;P&gt;I do hope that product marketing keeps the price in line with the 'old' approach&lt;/P&gt;</description>
      <pubDate>Wed, 25 Nov 2020 12:41:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1930628#M119</guid>
      <dc:creator>HesselW</dc:creator>
      <dc:date>2020-11-25T12:41:04Z</dc:date>
    </item>
    <item>
      <title>Introducing Asynchronous APIs for Text Analytics and Text Analytics for Health</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-asynchronous-apis-for-text-analytics-and-text/ba-p/1922422</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored with Sara Kandil&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are announcing a preview of new asynchronous (batch) APIs for Text Analytics and Text Analytics for health, which enable developers to apply Natural Language Processing (NLP) to even more scenarios so they can identify key phrases, entities and even personally identifiable information (PII).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Asynchronous Analyze API for&amp;nbsp;Text Analytics&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Text Analytics is a generally available &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/" target="_blank" rel="noopener"&gt;Azure Cognitive Service&lt;/A&gt; that lets you discover insights in text using Natural Language Processing (NLP). The service helps you identify key phrases and entities (people, places, organizations, events, and dates, among others), recognize text that contains personal information (PII), and analyze sentiment (positive, neutral, or negative).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To date, customers have been using Text Analytics by making synchronous calls to the service’s REST API or client library SDKs, or by using containers to run Text Analytics in their own environment. Today, we are introducing a new preview Analyze operation that lets users analyze larger documents asynchronously, combining multiple Text Analytics features in one call. This gives customers the flexibility to analyze more information at once when their applications don’t need a synchronous response. The new asynchronous Analyze operation for Text Analytics supports individual documents of up to 125k characters, and up to 25 documents in a request.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Analyze operation preview supports key phrase extraction, named entity recognition, and PII recognition, and is available in five Azure regions (West US 2, East US 2, West Europe, North Europe, and Central US). Support for the rest of the Text Analytics capabilities and additional regions is coming soon.&lt;/P&gt;
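&lt;P&gt;The sketch below illustrates what submitting an asynchronous Analyze job that combines several of these tasks might look like over REST. It is not taken from the official documentation: the API version, path, request body field names, and job status values are assumptions to verify against the current Text Analytics preview reference before use.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch (assumed API version, path, and field names): submit an asynchronous
# Analyze job combining key phrase, entity, and PII tasks, then poll for completion.
import time
import requests

ENDPOINT = "https://your-text-analytics.cognitiveservices.azure.com"   # placeholder
API_KEY  = "your-text-analytics-key"                                   # placeholder
headers  = {"Ocp-Apim-Subscription-Key": API_KEY}

body = {
    "displayName": "sample analyze job",
    "analysisInput": {"documents": [
        {"id": "1", "language": "en",
         "text": "Contoso hired Jane Doe in Seattle. Call her at 555-0100."}
    ]},
    "tasks": {
        "keyPhraseExtractionTasks":  [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionTasks":    [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionPiiTasks": [{"parameters": {"model-version": "latest"}}],
    },
}

resp = requests.post(f"{ENDPOINT}/text/analytics/v3.1-preview.3/analyze", headers=headers, json=body)
resp.raise_for_status()
job_url = resp.headers["operation-location"]   # URL to poll for job status and results

while True:
    job = requests.get(job_url, headers=headers).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(5)

print("job status:", job["status"])
&lt;/PRE&gt;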
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Asynchronous Analyze API for&amp;nbsp;Text Analytics for health&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;We&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;also&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;introducing&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;a new&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;asynchronous hosted&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;API for&amp;nbsp;&lt;/SPAN&gt;Text Analytics for&amp;nbsp;health.&amp;nbsp;&lt;SPAN data-contrast="none"&gt;As a refresher,&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;early this year (July), we&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/introducing-text-analytics-for-health/ba-p/1505152" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;announced&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;a preview of Text Analytics for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;h&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ealth, a capability for the healthcare industry, trained to extract insights from medical data.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;W&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ith Text Analytics for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;h&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ealth, users can:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;Detect words and phrases mentioned in unstructured text as entities that&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;associated with semantic types in the healthcare and biomedical domain – such as diagnosis, medication name, symptom/sign, and more.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259,&amp;quot;335559991&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;Link entities to medical ontologies and domain-specific coding systems (for example, the&amp;nbsp;&lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Unified Medical Language System&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;), and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;extract&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;meaningful connections between concepts mentioned in text (for example, finding the relationship between a medication name and the dosage associated with it.)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Detect negation&amp;nbsp;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;the different entities mentioned in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;text.&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259,&amp;quot;335559991&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_0-1606250369423.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/235820iE5F39FFECB8D9B44/image-size/large?v=v2&amp;amp;px=999" role="button" title="wmendoza_0-1606250369423.png" alt="Example of Text Analytics for health at work." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Example of Text Analytics for health at work.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Previously, Text Analytics for&amp;nbsp;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;h&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;ealth was only available for use via containers. Th&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;is&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;new API gives users the option to use the hosted service and avoid the heavy lifting of hosting containers unless they need to.&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;hosted&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Text Analytics for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;h&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ealth&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;operation&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;supports document sizes up to 5k characters and up to 10 documents in a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;single&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;request. It is available for use in&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;West US&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;2, East US&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;2, Central US,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;North&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;West Europe&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;regions&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In summary, Text Analytics is now more accessible with more ways to use the capabilities depending on your scenario. You can:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="none"&gt;Call the synchronous endpoints to use the Text Analytics features.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Call the async&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;hronous&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;Analyze API to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;process&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;larger documents&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;multiple&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;Text Analytics&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;features in a single call.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Call the&amp;nbsp;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;hosted asynchronous Text Analytics for health API if the dataset being analyzed contains clinical and biomedical documents.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Use Text Analytics containers to host the endpoint in your own environment to meet your privacy and security requirements.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;new Text Analytics asynchronous&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;APIs are available to use in Preview today. Please refer to our&amp;nbsp;&lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://aka.ms/TAforHealth-Docs" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;to learn&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;more a&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;nd&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;get started&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;&amp;nbsp;with these new APIs&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=6vX3Us1TOw8&amp;amp;list=PLlrxD0HtieHi0mwteKBOfEeOYf0LJU4O1&amp;amp;index=1" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/6vX3Us1TOw8/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Nov 2020 21:35:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-asynchronous-apis-for-text-analytics-and-text/ba-p/1922422</guid>
      <dc:creator>AshlyYeo</dc:creator>
      <dc:date>2020-11-24T21:35:16Z</dc:date>
    </item>
    <item>
      <title>Re: Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1923589#M118</link>
      <description>&lt;P&gt;Looks like more and more new functions are being released with REST API 3.1.&lt;img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@0277EEB71C55CDE7DB26DB254BF2F52B/images/emoticons/laugh_40x40.gif" alt=":lol:" title=":lol:" /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Nov 2020 13:49:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1923589#M118</guid>
      <dc:creator>Hao Hu</dc:creator>
      <dc:date>2020-11-23T13:49:58Z</dc:date>
    </item>
    <item>
      <title>Re: Computer Vision for spatial analysis at the Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/1922777#M117</link>
      <description>&lt;P&gt;I would also like to run the Spatial Analysis container on the NVIDIA Jetson Nano dev kit (4 GB).&lt;/P&gt;</description>
      <pubDate>Mon, 23 Nov 2020 09:18:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/1922777#M117</guid>
      <dc:creator>hemantkamalakar</dc:creator>
      <dc:date>2020-11-23T09:18:17Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1917122#M115</link>
      <description>&lt;P&gt;It's wonderful! I can deploy very quickly and manage resources very easily! Thanks Team!&lt;/P&gt;</description>
      <pubDate>Fri, 20 Nov 2020 09:40:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1917122#M115</guid>
      <dc:creator>mikko1</dc:creator>
      <dc:date>2020-11-20T09:40:27Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1907941#M114</link>
      <description>&lt;P&gt;This is amazing !&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Nov 2020 16:59:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1907941#M114</guid>
      <dc:creator>ivanatilca</dc:creator>
      <dc:date>2020-11-19T16:59:40Z</dc:date>
    </item>
    <item>
      <title>Neural Text-to-Speech previews five new languages with innovative models in the low-resource setting</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604</link>
      <description>&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post is co-authored with Xianghao Tang, Lihui Wang, Jun-Wei Gan, Gang Wang,&amp;nbsp; Garfield He, Xu Tan and Sheng Zhao&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt; (Neural TTS),&amp;nbsp;part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. For example, the &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt;, &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Progressive&lt;/A&gt; and &lt;A href="https://aka.ms/MotorolaSolutions" target="_blank" rel="noopener"&gt;Motorola Solutions&lt;/A&gt; are using Azure Neural TTS to develop conversational interfaces for their voice assistants in English speaking locales. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Swisscom&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;Poste Italiane&lt;/A&gt; are adopting neural voices in French, German and Italian to interact with their customers in the European market. &lt;A href="https://customers.microsoft.com/en-us/story/cheetah-mobile-consumer-goods-azure-cognitive-services-china" target="_blank" rel="noopener"&gt;Hongdandan&lt;/A&gt;, a non-profit organization, is using neural voices in Chinese to make their online books audible for the blind people in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544" target="_blank" rel="noopener"&gt;September 2020&lt;/A&gt;, we extended Neural TTS to support 49 languages/locales with 68 voices. At the same time, we continue to receive customer requests for more voice choices and more language support globally.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Azure Neural TTS has extended its global support to five new languages: Maltese, Lithuanian, Estonian, Irish and Latvian, in public preview. At the same time, Neural TTS Container is generally available for customers who want to deploy neural voice models on-prem for specific security requirements. &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Neural TTS previews 5 new languages&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Five new voices and languages are introduced to the Neural TTS portfolio. They are: Grace in Maltese (Malta), Ona in Lithuanian (Lithuania), Anu in Estonian (Estonia), Orla in Irish (Ireland) and Everita in Latvian (Latvia). These voices are available in public preview in &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;three Azure regions&lt;/A&gt;: EastUS, SouthEastAsia and WestEurope.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear samples of these voices, or try them with your own text in&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;our demo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;Locale&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Language&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;Voice name&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Audio sample&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“mt-MT-GraceNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/mt-MT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“lt-LT-OnaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lt-LT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“et-EE-AnuNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/et-EE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“ga-IE-OrlaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Tá an scoil sa mbaile ar oscailt arís inniu.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ga-IE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“lv-LV-EveritaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lv-LV.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
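&lt;P&gt;As a quick way to try one of the new voices from code, here is a minimal sketch using the Speech SDK for Python; the key and region are placeholders you would replace with your own Speech resource (the voices are in preview in EastUS, SouthEastAsia and WestEurope).&lt;/P&gt;
&lt;PRE&gt;# Minimal sketch: synthesize speech with one of the new preview voices
# using the Speech SDK for Python (pip install azure-cognitiveservices-speech).
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and a preview region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR-SPEECH-KEY", region="westeurope")
speech_config.speech_synthesis_voice_name = "lt-LT-OnaNeural"

# Play the synthesized audio through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.").get()
print(result.reason)&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;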
&lt;P&gt;With these updates, Azure TTS service now supports 54 languages/locales with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;78 neural voices&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;77 standard voices&lt;/A&gt; available. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Behind the scenes: 10x faster voice building in the low-resource setting&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The creation of a TTS voice model normally requires a large volume of training data, especially when extending to a new language, where sophisticated language-specific engineering is required. In this section, we introduce “&lt;STRONG&gt;LR-UNI-TTS&lt;/STRONG&gt;”, a new Neural TTS production pipeline for creating TTS in languages where training data is limited, i.e., ‘low-resourced’. With this innovation, we were able to develop new Neural TTS locales with 10x agility and support the five new languages quickly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;High resource vs. low resource&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Traditionally, it can easily take more than 10 months to extend the TTS service to a new language because of the extensive language-specific engineering required. This includes collecting tens of hours of language-specific training data and creating hand-crafted components such as text analysis. In many cases, a major challenge for supporting a new language is that such a large volume of data is unavailable or hard to find, leaving the language ‘low-resourced’ for TTS model building. To handle this challenge, Microsoft researchers proposed an innovative approach called &lt;A href="https://arxiv.org/pdf/2008.03687.pdf" target="_blank" rel="noopener"&gt;LRSpeech&lt;/A&gt;, designed for extremely low-resourced TTS development. LRSpeech has been shown to build good-quality TTS in the low-resource setting, using multilingual pre-training, knowledge distillation, and, importantly, the dual transformation between text-to-speech (TTS) and speech recognition (SR).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How LR-UNI-TTS works&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Built on top of LRSpeech and the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;multi-lingual multi-speaker&lt;/A&gt; transformer TTS model (called UNI-TTS), we have designed the offline model training pipeline and the online inference pipeline for the low-resource TTS.&amp;nbsp; Three key innovations contribute to the significant agility gains with this approach.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, by leveraging the parallel speech data (paired speech audio and transcripts) collected during speech recognition development, the LR-UNI-TTS training pipeline greatly reduces the data requirements for refining the base model in the new language. Previously, high-quality multi-speaker parallel data was critical for extending TTS to a new language. TTS speech data is more difficult to collect because it requires the data to be clean, the speaker carefully selected, and the recording process well controlled to ensure high audio quality.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Second, by applying cross-lingual speaker transfer technology with the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;UNI-TTS&lt;/A&gt; pipeline, we are able to leverage existing high-quality data in a different language to produce a new voice in the target language. This saves the effort of finding a new professional speaker for the new languages. Traditionally, high-quality parallel speech data in the target language is required, and the voice design, voice talent selection, and recording easily take months.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Lastly, the LR-UNI-TTS approach uses characters instead of phonemes as the input feature to the models, whereas the high-resource TTS pipeline is usually composed of a multi-step text analysis module that turns text into phonemes and takes a long time to build.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The figure below describes the offline training pipeline for the low-resource TTS voice model.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_10" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="offline-training.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/234719i763495F6744FA7BC/image-size/large?v=v2&amp;amp;px=999" role="button" title="offline-training.png" alt="Figure 1. The offline training pipeline for the low-resource TTS voice model." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 1. The offline training pipeline for the low-resource TTS voice model.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Specifically, at the offline training stage, we leveraged a few hundred hours of speech recognition data to further refine the UNI-TTS model. This helps the base model learn more prosody and pronunciation patterns for the new locales. Speech recognition data is usually collected in everyday environments using PCs or mobile devices, unlike TTS data, which is normally collected in professional recording studios. Although the SR data can be of much lower quality than TTS data, we have found that LR-UNI-TTS can benefit from such data effectively.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this approach, the high-quality parallel data in the new language that is usually required for TTS voice training becomes optional. If such high-quality parallel data is available, it can be used to build the target voice in the new language. If not, we can choose a suitable speaker from an existing but different language and transfer that voice into the new language through the cross-lingual speaker transfer-learning capability of UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The chart below describes the runtime flow.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="online-inference.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/234722i6DAF707A8F8BE2BE/image-size/large?v=v2&amp;amp;px=999" role="button" title="online-inference.png" alt="Figure 2: The online inference pipeline for the low-resource TTS voice model." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 2: The online inference pipeline for the low-resource TTS voice model.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_11" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;At runtime, a lightweight text analysis module preprocesses the text input with sentence separation and text normalization. Compared to the text analysis component of the high-resource language pipelines, this module is greatly simplified. For instance, it does not include the pronunciation lexicon or letter-to-sound rules used in high-resource languages. The normalized text characters are generated by this lightweight component. During this process, we also leverage the text normalization rules from speech recognition development, which substantially reduces the overall cost.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The other components are similar to the high-resource language pipelines. For example, the neural acoustic model uses the &lt;A href="https://arxiv.org/pdf/1905.09263.pdf" target="_blank" rel="noopener"&gt;FastSpeech&lt;/A&gt; model to convert the character input into mel-spectrogram.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, the neural vocoder &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;HiFiNet&lt;/A&gt; is used to convert the mel-spectrogram into audio output.&lt;/P&gt;
&lt;P&gt;Overall, using LR-UNI-TTS, a TTS model in a new language can be built in about one month, which is 10x faster than traditional approaches.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the next section, we share the quality measurement results for the voices built with LR-UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Quality assessments&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As with other TTS voices, the quality of the low-resource voices created in the new languages is measured using Mean Opinion Score (MOS) tests and intelligibility tests. MOS is a widely recognized scoring method for speech naturalness evaluation. In MOS studies, participants rate speech characteristics such as sound quality, pronunciation, speaking rate, and articulation on a 5-point scale, and an average score is calculated for the report. An intelligibility test measures how intelligible a TTS voice is: judges listen to a set of TTS samples and mark the words that are unintelligible to them. The intelligibility rate is the percentage of correctly understood words among the total number of words tested (i.e., the number of intelligible words / the total number of words tested * 100%). Normally, a usable TTS engine needs to reach a score of &amp;gt; 98% for intelligibility.&lt;/P&gt;
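&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For concreteness, here is a toy calculation of the intelligibility rate with made-up numbers (not from our actual tests):&lt;/P&gt;
&lt;PRE&gt;# Toy example of the intelligibility-rate calculation described above.
# The counts are illustrative only.
tested_words = 1500
unintelligible_words = 12
intelligibility_rate = (tested_words - unintelligible_words) / tested_words * 100
print(f"{intelligibility_rate:.2f}%")  # 99.20%, above the 98% bar for a usable engine&lt;/PRE&gt;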
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The table below summarizes the MOS score and the intelligibility score of the five new languages created using LR-UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;&lt;STRONG&gt;Language (Region)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;&lt;STRONG&gt;Average MOS&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;&lt;STRONG&gt;Intelligibility&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;3.59*&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;98.40%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.35&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.25%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.52&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;98.73%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.62&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.43%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.51&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.13%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;FONT size="2"&gt;* Note: MOS scores are subjective and not directly comparable across languages. The MOS of the mt-MT voice is relatively lower but reasonable in this case considering that the human recordings used as the training data for this voice also gots a lower MOS.&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As shown in the table, the voices created with the limited resources available are highly intelligible and have achieved high or reasonable MOS scores among native speakers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It’s worth pointing out that due to the nature of the lightweight text analysis module for the runtime, the phoneme-based SSML tuning capabilities are not supported for the low-resource voice models, for example, &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#use-phonemes-to-improve-pronunciation" target="_blank" rel="noopener"&gt;the ‘phoneme’ and the ‘lexicon’ elements&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Coming next: extending Neural TTS to even more locales&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;LR-UNI-TTS has paved the way for us to extend Neural TTS to more languages for global users more quickly. Most excitingly, LR-UNI-TTS can potentially be applied to help preserve languages that are disappearing in the world today, as pointed out in the guiding principles of &lt;A href="https://www.microsoft.com/en-us/research/blog/a-holistic-representation-toward-integrative-ai/" target="_blank" rel="noopener"&gt;XYZ-code&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the five new languages released in public preview, we welcome user feedback as we continue to improve the voice quality. &lt;SPAN&gt;We are also interested in partnering with passionate people and organizations to create TTS for more languages. Contact us (mstts[at]microsoft.com) for more details.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;What’s more: Neural TTS Container GA&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Together with the preview of these five new languages, we are happy to share that the Neural TTS Container is now GA. With Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements.&amp;nbsp; Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;how to install Neural TTS Container &lt;/A&gt;&amp;nbsp;and visit the&amp;nbsp;&lt;A href="https://aka.ms/cscontainers-faq" target="_blank" rel="noopener"&gt;Frequently Asked Questions&lt;/A&gt;&amp;nbsp;on Azure Cognitive Services Containers.&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Get started&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers, supporting more flexible deployment. Azure Text-to-Speech service provides more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;150 voices in over 50 languages&lt;/A&gt; for developers all over the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Check out our &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 19 Nov 2020 16:30:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-11-19T16:30:01Z</dc:date>
    </item>
    <item>
      <title>How to operationalize more than 100 AI models in as little as 12 weeks using Azure Databricks</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-operationalize-more-than-100-ai-models-in-as-little-as-12/ba-p/1892062</link>
      <description>&lt;P&gt;Organizations are leveraging artificial intelligence (AI) and machine learning (ML) to derive insight and value from their data and to improve the accuracy of forecasts and predictions.&amp;nbsp;&lt;FONT style="background-color: #ffffff;"&gt;In rapidly changing environments, &lt;/FONT&gt;&lt;A href="https://dbricks.co/3kCItuU" target="_blank" rel="noopener"&gt; Azure Databricks&lt;/A&gt; enables organizations to spot new trends, respond to unexpected challenges and predict new opportunities.&amp;nbsp;&lt;FONT style="background-color: #ffffff;"&gt;Data teams are using Delta Lake to &lt;A href="https://dbricks.co/3pB3VUP" target="_blank" rel="noopener"&gt;accelerate ETL pipelines&lt;/A&gt; and MLflow to establish a &lt;A href="https://dbricks.co/32RbkG2" target="_blank" rel="noopener"&gt;consistent ML lifecycle&lt;/A&gt;.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Solving the complexity of ML frameworks, libraries and packages&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;Customers frequently struggle to manage all of the libraries and frameworks for machine learning on a single laptop or workstation. There are so many libraries and frameworks to keep in sync (H2O, PyTorch, scikit-learn, MLlib). In addition, you often need to bring in other Python packages, such as Pandas, Matplotlib, numpy and many others. Mixing and matching versions and dependencies between these libraries can be incredibly challenging.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Databricks-runtime-for-ML.png" style="width: 512px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233822iE865E26B69A99775/image-size/large?v=v2&amp;amp;px=999" role="button" title="Databricks-runtime-for-ML.png" alt="Databricks-runtime-for-ML.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;FONT style="background-color: #ffffff;"&gt;Figure 1.&amp;nbsp;Databricks Runtime for ML enables ready-to-use clusters with built-in ML Frameworks&lt;/FONT&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;With Azure Databricks, these frameworks and libraries are packaged so that you can select the versions you need as a single dropdown. We call this the Databricks Runtime. Within this runtime, we also have a specialized runtime for machine learning which we call the &lt;A href="https://dbricks.co/36W25Wr" target="_blank" rel="noopener"&gt;Databricks Runtime for Machine Learning&lt;/A&gt; (ML Runtime). All these packages are pre-configured and installed so you don’t have to worry about how to combine them all together. Azure Databricks updates these every 6-8 weeks, so you can simply choose a version and get started right away.&lt;BR /&gt;&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Establishing a consistent ML lifecycle with MLflow&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;The goal of machine learning is to optimize a metric such as forecast accuracy. Machine learning algorithms are run on training data to produce models. These models can be used to make predictions as new data arrive. The quality of each model depends on the &lt;A href="https://dbricks.co/2UzmHO8" target="_blank" rel="noopener"&gt;input data and tuning parameters&lt;/A&gt;. Creating an accurate model is an &lt;A href="https://dbricks.co/2Kfq5vS" target="_blank" rel="noopener"&gt;iterative process&lt;/A&gt; of experiments with various libraries, algorithms, data sets and models. The MLflow open source project started about two years ago to manage each phase of the model management lifecycle, from input through hyperparameter tuning. &lt;A href="https://dbricks.co/2K5UQmK" target="_blank" rel="noopener"&gt;MLflow recently joined the Linux Foundation&lt;/A&gt;. Community support has been tremendous, with 250 contributors, including large companies. In June, MLflow surpassed 2.5 million monthly downloads.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MLflow-unifies-data-scientists-and-engineers.png" style="width: 512px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233825i7021AD73B8DCEE19/image-size/large?v=v2&amp;amp;px=999" role="button" title="MLflow-unifies-data-scientists-and-engineers.png" alt="MLflow-unifies-data-scientists-and-engineers.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;EM&gt;&lt;FONT style="background-color: #ffffff;"&gt;Diagram: MLflow unifies data scientists and data engineers&lt;/FONT&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
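&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;As a minimal sketch of how a single training run can be tracked with MLflow on Azure Databricks (the dataset, model and metric below are illustrative placeholders, not from any customer workload):&lt;/FONT&gt;&lt;/P&gt;
&lt;PRE&gt;# Minimal sketch: track one training run with MLflow.
# The dataset and model are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    # Log the hyperparameters for this experiment.
    params = {"n_estimators": 100, "max_depth": 6}
    mlflow.log_params(params)

    model = RandomForestRegressor(**params, random_state=42)
    model.fit(X_train, y_train)

    # Log the evaluation metric and the fitted model artifact.
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    mlflow.log_metric("rmse", rmse)
    mlflow.sklearn.log_model(model, "model")&lt;/PRE&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;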
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Ease of infrastructure management&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;Data scientists want to focus on their models, not infrastructure. You don’t have to manage dependencies and versions. It scales to meet your needs. As your data science team begins to process bigger data sets, you don’t have to do capacity planning or requisition/acquire more hardware. With Azure Databricks, it’s easy to onboard new team members and grant them access to the data, tools, frameworks, libraries and clusters they need.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Alignment Healthcare&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;A href="https://dbricks.co/36K6FXB" target="_blank" rel="noopener"&gt;Alignment Healthcare&lt;/A&gt;, a rapidly growing Medicare insurance provider, serves one of the most at-risk groups of the COVID-19 crisis—seniors. While many health plans rely on outdated information and siloed data systems, Alignment processes a wide variety and large volume of near real-time data into a unified architecture to build a revolutionary digital patient ID and comprehensive patient profile by leveraging Azure Databricks. This architecture powers more than 100 AI models designed to effectively manage the health of large populations, engage consumers, and identify vulnerable individuals needing personalized attention—with a goal of improving members’ well-being and saving lives.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Building your first machine learning model with Azure Databricks&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;To help you get a feel for Azure Databricks, follow the code samples and videos in &lt;A href="https://dbricks.co/38Sjz8v" target="_blank" rel="noopener"&gt;this blog post&lt;/A&gt; to build a simple model using sample data in Azure Databricks. Learn how to by attending an &lt;A href="https://dbricks.co/2K93eBX" target="_blank" rel="noopener"&gt;Azure Databricks event&lt;/A&gt;, watch how you can &lt;A href="https://dbricks.co/3nxfzP4" target="_blank" rel="noopener"&gt;Turbocharge your business with Machine Learning&lt;/A&gt;, leverage this &lt;A href="https://dbricks.co/3nuBRAJ" target="_blank" rel="noopener"&gt;free Azure Databricks ML training module on MS Learn&lt;/A&gt; and join us at our next &lt;A href="https://dbricks.co/3kB1Qog" target="_blank" rel="noopener"&gt;Azure Databricks Office Hours&lt;/A&gt;.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Tue, 17 Nov 2020 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-operationalize-more-than-100-ai-models-in-as-little-as-12/ba-p/1892062</guid>
      <dc:creator>ClintonWFord-Databricks</dc:creator>
      <dc:date>2020-11-17T14:00:00Z</dc:date>
    </item>
    <item>
      <title>November 2020 – Conversational AI update</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/november-2020-conversational-ai-update/ba-p/1892528</link>
      <description>&lt;P&gt;We are excited to announce the November release of the Bot Framework SDK and Composer, driving the Microsoft Conversational AI platform forward and building on the announcements we made in September at Microsoft Ignite. Our November update sees new updates to the Bot Framework SDK and Bot Framework Composer, adding new capabilities for developers and improving integration with our key partners, including &lt;A href="http://powerva.microsoft.com/" target="_self"&gt;Power Virtual Agents&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/healthbot/" target="_self"&gt;HealthBot&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework v4.11&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/what-is-new?view=azure-bot-service-4.0" target="_self"&gt;Version 4.11 of the Bot Framework SDK&lt;/A&gt;, including new releases for .NET, JavaScript, Python and Java (preview 7), along with updates to our tooling, including the CLI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Following our quality focused 4.10 release, we continue to push on this area, including improvements to the commonly used typing and transcript logging middleware behavior and associated error handling.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For developers building solutions for Microsoft Teams, new support for meetings has been added, including the Meeting Participant API and meeting specific notifications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We continue to reduce developer friction for Skill development, adding the ability to test Skills locally, using the Bot Framework Emulator, without requiring an App Id and password. Additional scenarios, such as interruption support when calling a Skill and the ability to update or delete activities from a Skill have also been added.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Skills support has now also been added to &lt;A href="https://docs.microsoft.com/en-us/healthbot/" target="_self"&gt;HealthBot&lt;/A&gt;, a cloud platform for virtual health bots and assistants built on Bot Framework, with solutions now able to consume Bot Framework Skills, or to be consumed as a Skill themselves.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’re also undertaking significant investments in automated testing in this area, with the opportunity for you to review and provide feedback on the current specifications for &lt;A href="https://github.com/microsoft/botframework-sdk/blob/main/specs/testing/skills/SkillsFunctionalTesting.md" target="_self"&gt;Functional Testing&lt;/A&gt; and the &lt;A href="https://github.com/microsoft/BotFramework-FunctionalTests/blob/main/specs/TransciptTestRunner.md" target="_self"&gt;Test Runner&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Further improvements to our documentation include expanding content across Adaptive Dialogs, Skills, overall architecture topics, as well as adding &lt;A href="https://docs.microsoft.com/en-us/java/api/?term=microsoft.bot.builder" target="_self"&gt;reference documentation for the Java SDK preview&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework Composer v1.2&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A &lt;A href="https://docs.microsoft.com/en-us/composer/what-is-new" target="_self"&gt;new release of Composer (v1.2)&lt;/A&gt; is now available. This release deepens integration with Power Virtual Agents (PVA), part of the Power Platform, with a new &lt;A href="https://powervirtualagents.microsoft.com/en-us/blog/power-virtual-agents-integration-with-bot-framework-composer-is-available-in-public-preview/" target="_self"&gt;public preview of PVA integration with Bot Framework Composer&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Blog_PVA_Composer_HD.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233926iA4B430BED5ED451D/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog_PVA_Composer_HD.gif" alt="Blog_PVA_Composer_HD.gif" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;Users of the no-code PVA platform were already able to extend their solutions by consuming Bot Framework Skills. Now, PVA solutions can be opened in Bot Framework Composer, using a deep-link from the PVA portal, extending them with more sophisticated capabilities and enabling the collaboration between business users and developers on the same project.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://powervirtualagents.microsoft.com/en-us/blog/power-virtual-agents-integration-with-bot-framework-composer-is-available-in-public-preview/" target="_self"&gt;Try the new Power Virtual Agents integration with Bot Framework Composer today!&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When ready, Composer developers can publish directly from Composer, using a pre-configured publishing profile, back into the PVA portal, with new PVA Topics added using Composer then shown alongside existing Topics and immediately ready for testing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An upcoming release of Composer, expected in December, will add improved provisioning and publishing support and enhanced QnA Maker knowledgebase integration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of the December release, users will also have the option to enable new preview capabilities through the addition of feature flags.&amp;nbsp; The first preview features planned include &lt;A href="https://aka.ms/bf-orchestrator" target="_self"&gt;Orchestrator&lt;/A&gt; integration, the new intent detection and arbitration (dispatch) technology that runs locally within your bot, along with Form Dialogs, enabling the rapid generation of intelligent slot-filling dialogs, including complex capabilities such as slot disambiguation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Nightly builds of Composer are available (enabled via the Composer settings page) which allow you to try the latest updates as soon as they are available.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Nov 2020 21:21:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/november-2020-conversational-ai-update/ba-p/1892528</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-11-16T21:21:35Z</dc:date>
    </item>
    <item>
      <title>Re: Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1879387#M110</link>
      <description>&lt;P&gt;Congrats Team. Loved the deployment model without compromising data residency requirements.&lt;/P&gt;</description>
      <pubDate>Thu, 12 Nov 2020 07:19:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/bc-p/1879387#M110</guid>
      <dc:creator>vikasgoyal</dc:creator>
      <dc:date>2020-11-12T07:19:18Z</dc:date>
    </item>
    <item>
      <title>Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575</link>
      <description>&lt;P&gt;QnA Maker is an Azure Cognitive Service that allows you to create a conversational layer over your data in minutes. Today, we are announcing a new version of QnA Maker that advances several core capabilities, such as better relevance and precise answering, by introducing state-of-the-art deep learning technologies.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604338472497.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233098iCD0584EEFD441E53/image-size/large?v=v2&amp;amp;px=999" role="button" title="nerajput_1-1604338472497.png" alt="Illustrative representation of QnA Maker functionality." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Illustrative representation of QnA Maker functionality.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Overview of new QnA Maker managed capabilities&lt;/H1&gt;
&lt;P&gt;Summary of new features introduced:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deep learnt ranker with enhanced relevance of results across all &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support" target="_blank" rel="noopener"&gt;supported languages&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Precise phrase/short answer extraction from answer passages.&lt;/LI&gt;
&lt;LI&gt;Simplified resource management by reducing the number of resources deployed.&lt;/LI&gt;
&lt;LI&gt;E2E region support for Authoring + Prediction.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Detailed description of the new features is further down in this article. Learn how to migrate to the new QnA Maker managed (Preview) knowledge base &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/tutorials/migrate-knowledge-base" target="_blank" rel="noopener"&gt;here.&lt;/A&gt;&lt;/P&gt;
&lt;H1&gt;QnA Maker managed (Preview) Architecture&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;With the QnA Maker managed (Preview) architecture, only two resources are deployed: the QnA Maker service for authoring and computation, and Azure Cognitive Search for storage and L1 ranking. This simplifies resource creation and management: customers now manage two resources instead of five.&lt;/LI&gt;
&lt;LI&gt;QnA Maker managed (Preview) also allows you to configure the language setting per knowledge base.&lt;/LI&gt;
&lt;LI&gt;Computation has been moved out of the customer subscription, so customers are no longer responsible for scaling and availability management. This allowed us to use a state-of-the-art deep learning model for the L2 ranker, which improves precision horizontally across all 50+ supported languages.&lt;/LI&gt;
&lt;LI&gt;The QnA Maker service will be available in multiple regions, giving customers the flexibility to keep their end-to-end service in one region.&lt;/LI&gt;
&lt;LI&gt;For inference logs and telemetry, the latest version uses Azure Monitor instead of Application Insights. To keep the experience seamless and easy to adopt, all APIs have been kept backward compatible, and there is almost no change in the management portal experience.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604339027855.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230873i294601F3DC0BA357/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_1-1604339027855.png" alt="nerajput_1-1604339027855.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H1&gt;New features of QnA Maker managed (Preview)&lt;/H1&gt;
&lt;P&gt;This section describes all the distinguishing features of QnA Maker managed (Preview) in detail.&lt;/P&gt;
&lt;H2&gt;Simplified Create Blade&lt;/H2&gt;
&lt;P&gt;Onboarding to QnA Maker managed (Preview) and resource creation have been kept simple. You will now see a &lt;STRONG&gt;Managed&lt;/STRONG&gt; checkbox, as shown below. As soon as you select the checkbox, the form is updated with the required resources.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604339332630.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230881iD29DB949BC8E7638/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_1-1604339332630.png" alt="nerajput_1-1604339332630.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Precise Answering&lt;/H2&gt;
&lt;P&gt;The Machine Reading Comprehension-based answer span detection feature is most beneficial in scenarios where customers have long passages as answers in their knowledge base. Today, they put a good amount of manual effort into curating short, precise answers and ingesting them into the knowledge base.&lt;/P&gt;
&lt;P&gt;The new feature gives them the flexibility to return either the precise answer or the full answer passage; customers can make this decision based on the confidence scores of the precise short answer and the answer passage (a rough code sketch of this choice follows the examples below). Here are some examples to show how short answers can be useful:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_0-1604339230181.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230880iD3FFAB4B253833CA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_0-1604339230181.png" alt="nerajput_0-1604339230181.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Deep Learnt ranker&lt;/H2&gt;
&lt;P&gt;The new L2 ranker is based on the &lt;A href="https://www.microsoft.com/en-us/research/blog/microsoft-turing-universal-language-representation-model-t-ulrv2-tops-xtreme-leaderboard/" target="_self"&gt;Turing multilingual language model (T-ULRv2)&lt;/A&gt;, a deep learning-based transformer model, which improves the precision of the service for all languages.&amp;nbsp;For any user query, the new L2 ranker understands the semantics of the query better and returns better-aligned results. The model is not language specific and is designed to improve overall precision across all languages horizontally. Here are some examples comparing the results of the current service and the QnA Maker managed (Preview) service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="671"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Current GA results &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;QnA Maker managed (Preview) results &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; Improvements in Preview&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;can someone ring me&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;I can tell you all about Wi-Fi calling, including the devices that support Wi-Fi calling and where you can get more information yourself. Feel free to ask me a question and I'll do what I can to answer it&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;Yes, you can make and receive calls using Wi-Fi calling. Pretty nifty, right?&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;The new L2 ranker understands the relevance between “ring me” and “make and receive calls” and returns a more relevant result, unlike the current GA model, which returned a generic answer.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;can’t connect to mobile data&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;You'll be connected to Wi-Fi, so it'll only use your minutes and text allowances.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;If you don't have mobile signal, it's no problem. With Three inTouch Wi-Fi Calling, you can call and text whenever you're on Wi-Fi in the UK, even without mobile signal.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;The new L2 ranker again understands the query better: it recognizes that mobile data is related to mobile signal, and therefore returns a better result from the data in the knowledge base than the current GA model.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;E2E region support&lt;/H2&gt;
&lt;P&gt;With QnA Maker managed (Preview), our management service is no longer limited to the West US region. We offer end-to-end region support for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;South Central US&lt;/LI&gt;
&lt;LI&gt;North Europe&lt;/LI&gt;
&lt;LI&gt;Australia East.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Additional regions will be added when the service reaches general availability.&lt;/P&gt;
&lt;H2&gt;Knowledge Base specific language setting&lt;/H2&gt;
&lt;P&gt;Customers can now create knowledge bases with different language settings within a single service. This is beneficial for users with multi-language scenarios who need the service to support more than one language. Each knowledge base gets its own test index, so customers can verify how the service performs for each language.&lt;/P&gt;
&lt;P&gt;You can configure this setting only when creating the first knowledge base of the service; once set, it cannot be updated.&lt;/P&gt;
&lt;H2&gt;Pricing&lt;/H2&gt;
&lt;P&gt;The public preview of QnA Maker managed is free in all regions (you pay only for the Azure Cognitive Search SKU). Standard pricing will apply when the service reaches general availability, expected by mid-2021.&lt;/P&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcognitive-services%2Fqnamaker%2Fhow-to%2Fset-up-qnamaker-service-azure&amp;amp;data=04%7C01%7CNeha.Rajput%40microsoft.com%7C1c1572c23454483ee7d308d87c580ec8%7C72f988bf86f141af91ab2d7cd011db47%7C0%7C0%7C637396064955343068%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=X4h6Z2rWsCewb17gHyPPHZDYqMSX3bYlXkD7pZDP9%2Bo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Create your QnA Maker managed (Preview) service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcognitive-services%2Fqnamaker%2Ftutorials%2Fmigrate-knowledge-base&amp;amp;data=04%7C01%7CNeha.Rajput%40microsoft.com%7C1c1572c23454483ee7d308d87c580ec8%7C72f988bf86f141af91ab2d7cd011db47%7C0%7C0%7C637396064955343068%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=NbpwblEPLnDhTGRD9tarqPdHvH7FmWISwkF8hfvsPFA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Migrate your knowledge base to the new Preview.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=h1wwjBpSeZ4" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/h1wwjBpSeZ4/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Nov 2020 05:55:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2020-11-19T05:55:03Z</dc:date>
    </item>
    <item>
      <title>Azure speaks your language: the 3 immediate benefits for your organization</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-speaks-your-language-the-3-immediate-benefits-for-your/ba-p/1853544</link>
      <description>&lt;P class="hp hq fx hr b hs ht hu hv hw hx hy hz ia ib ic id ie if ig ih ii cx dv" data-selectable-paragraph=""&gt;The last several years brought exciting innovations in the field of Artificial Intelligence, especia&lt;SPAN&gt;l&lt;/SPAN&gt;ly when it comes to advancements in speech and language processing. Processing speech and making text and audio information searchable enables a diverse set of innovative applications, including helping researchers in searching for related papers, or building information graphs for predicting the best new drug candidates, or uncovering issues with products and services in near real time. For region like Central and Eastern Europe, which includes 30+ countries, most speaking their own language, support for local languages is a critical condition for implementing innovation. That’s why the recent (September 2020) Azure Speech services update has opened a whole new area of opportunity for our region.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ht hu hv hw hx hy hz ia ib ic id ie if ig ih ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;With updated language support,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;most of the EU languages are now supported in Azure Speech services&lt;/STRONG&gt;. For region which I am covering in my current role, it means that we now have support for all of our CEE EU languages&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;(Polish, Bulgarian, Czech, Greek, Croatian, Hungarian, Romanian, Slovak, Slovenian, Estonian, Lithuanian, Latvian, Maltese)&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;Russian&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;in Azure Speech and Translator services. Additionally, our speech generation models have also been updated, now leveraging the Neural TTS - a powerful speech synthesis capability, which enables to convert text to lifelike speech which is close to human-parity. Below you will find&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;3 benefits, how this might help you advance your products and services today&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="health.jpeg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231462iF4E4CD2A64FAE41D/image-size/large?v=v2&amp;amp;px=999" role="button" title="health.jpeg" alt="health.jpeg" /&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Automatic generation of medical summary from spoken conversations between doctors and patients&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;First&lt;/STRONG&gt;, analyzing speech data or generating speech enables you to extract insights from audio or video information, which otherwise would be unreachable for analytical systems. This might include data like customer support conversations or employee speech in videos or transcribing speech for field employees or doctors. Communicating with your customers with natural-sounding generated speech in your own language is another area of innovation, which enables scenarios from voice announcements to supporting people with visual impairments to building voice assistants. Is information the new currency? If you answer “yes” to this — why then would you have terabytes of currency sitting without you getting use of it? Now you can turn it into tangible cash-flow.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv lia-indent-padding-left-30px" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv lia-indent-padding-left-30px" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;&lt;EM class="jd"&gt;Azure Speech&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM class="jd"&gt;services are a sub-set of pre-built (but customizable) APIs for working with Speech. This includes transcribing spoken language into text for further analysis (Speech-to-Text) and generating naturally sounding speech form text input (Text-to-Speech). Azure Translator is another piece in the puzzle, which has also received major update for the languages, now translating text between 70+ languages.&lt;/EM&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;Second&lt;/STRONG&gt;, there are new scenarios enabled now by these pre-built AI models. Do you have that innovative idea for analysing customer conversations or augmenting your service with spoken messages in your local language? Often, these ideas were not realized due to the associated challenges like finding the right skilled people within your organization and investing into a project with unknown development cycle and returns. Now it is possible to build a realistic prototype app quickly to extract insights from your speech data, by calling the service through the API — in days, if not hours.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="CLO18_headset_003.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231466i2ABB926C77241685/image-size/large?v=v2&amp;amp;px=999" role="button" title="CLO18_headset_003.jpg" alt="CLO18_headset_003.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;FONT size="2"&gt;Analysing customer support conversations brings insights from priceless data, which is untapped without applying Speech processing&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;Third&lt;/STRONG&gt;, this is one of those cloud services, which may work without sending your data to the cloud! Many of Azure Cognitive Services today may be deployed right within your own data center as containers. This means, that none of the actual data will be sent to the cloud, as even processing will happen locally. In this case, only billing information will be exchanged with Azure.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;Interested enough to give it a try? If you are interested in learning more, you may&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A class="bo js" href="https://azure.microsoft.com/en-us/overview/sales-number/?wt.mc_id=AID3025025_QSG_BLOG_488906" target="_blank" rel="noopener nofollow"&gt;request detailed information or virtual session on Azure Cognitive Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from our sales representatives (please specify whether you are looking for the session on Azure Cognitive services, or details of your specific projects where Speech services may be used). To read more or test Azure Speech services capabilities in your language, please refer to our&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A class="bo js" href="https://azure.microsoft.com/en-us/services/cognitive-services/speech-services/?wt.mc_id=AID3025025_QSG_BLOG_488907" target="_blank" rel="noopener nofollow"&gt;Azure Speech Services Documentation&lt;/A&gt;.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;Looking forward to the exciting results you will achieve in your business with the updated Azure Speech Services!&lt;/P&gt;</description>
      <pubDate>Wed, 04 Nov 2020 15:48:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-speaks-your-language-the-3-immediate-benefits-for-your/ba-p/1853544</guid>
      <dc:creator>dturchyn</dc:creator>
      <dc:date>2020-11-04T15:48:28Z</dc:date>
    </item>
    <item>
      <title>Re: Using GitHub Actions &amp; Azure Machine Learning for MLOps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/using-github-actions-amp-azure-machine-learning-for-mlops/bc-p/1853086#M106</link>
      <description>&lt;P&gt;Hi David&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am trying to follow this article but I am getting the below error in GitHub Actions:&lt;/P&gt;
&lt;DIV class="application-main d-flex flex-auto flex-column" data-commit-hovercards-enabled="" data-discussion-hovercards-enabled="" data-issue-and-pr-hovercards-enabled=""&gt;
&lt;DIV class="d-flex flex-auto overflow-hidden"&gt;
&lt;DIV class="container-xl clearfix new-discussion-timeline d-flex flex-column flex-auto p-0"&gt;
&lt;DIV class="repository-content d-flex flex-auto"&gt;
&lt;DIV class="d-flex flex-column width-full"&gt;
&lt;DIV class="d-flex flex-column width-full flex-auto"&gt;
&lt;DIV class="d-flex flex-items-stretch flex-auto"&gt;
&lt;DIV class="d-flex flex-items-stretch flex-auto overflow-x-auto"&gt;
&lt;DIV class="px-4 pt-2 border-top border-left d-flex flex-auto flex-column overflow-x-hidden"&gt;
&lt;DIV class="js-updatable-content js-socket-channel mb-4" data-url="/shikha1970/mlops/actions/runs/345168332/workflow_run" data-channel="eyJjIjoiY2hlY2tfc3VpdGVzOjE0NDgxNjA2MjMiLCJ0IjoxNjA0NDk1Mzc2fQ==--a3530e5e443b78dbc155ef0512a52afffcc92e602e3ebe7825a4d0133fdf4e23"&gt;
&lt;DIV class="Details js-details-container Box position-relative check-annotation check-annotation-failure my-2"&gt;
&lt;DIV class="py-2 pl-4 ml-3 overflow-scroll"&gt;&lt;CODE&gt;&lt;/CODE&gt;
&lt;PRE&gt;Microsoft REST Authentication Error: Get Token request returned http error: &lt;BR /&gt;400 and server response: ***"error":"unauthorized_client",&lt;BR /&gt;"error_description":"AADSTS700016: Application with identifier '***' was not found in the directory '***'. &lt;BR /&gt;This can happen if the application has not been installed by the administrator of the tenant or &lt;BR /&gt;consented to by any user in the tenant. &lt;BR /&gt;You may have sent your authentication request to the wrong tenant.\r\n&lt;BR /&gt;Trace ID: a4df3138-08b0-422c-88bb-5dd4eaa3c900\r\nCorrelation ID: f79fe3ff-a262-432c-ad93-ba189c0f88ba\r\nTimestamp: 2020-11-04 08:34:23Z",&lt;BR /&gt;"error_codes":[700016],"timestamp":"2020-11-04 08:34:23Z","trace_id":"a4df3138-08b0-422c-88bb-5dd4eaa3c900",&lt;BR /&gt;"correlation_id":"f79fe3ff-a262-432c-ad93-ba189c0f88ba","error_uri":"https://login.microsoftonline.com/error?code=700016"***&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Wed, 04 Nov 2020 13:12:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/using-github-actions-amp-azure-machine-learning-for-mlops/bc-p/1853086#M106</guid>
      <dc:creator>shikhaagrawal</dc:creator>
      <dc:date>2020-11-04T13:12:46Z</dc:date>
    </item>
    <item>
      <title>Azure Neural TTS upgraded with HiFiNet, achieving higher audio fidelity and faster synthesis speed</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860</link>
      <description>&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post was co-authored with Jinzhu Li and Sheng Zhao&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&amp;nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;close to human-parity&lt;/A&gt;. Since its launch, we have seen it widely adopted in a variety of scenarios by many Azure customers, from voice assistants like the customer service bot like &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Poste Italiane&lt;/SPAN&gt;&lt;/A&gt;, to audio content creation scenarios like &lt;A href="https://youtu.be/m-3-D7S0piw?t=668" target="_blank" rel="noopener"&gt;Duolingo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Voice quality, which includes the accuracy of pronunciation, the naturalness of prosody such as intonation and stress patterns, and &lt;EM&gt;the fidelity of audio&lt;/EM&gt;, is the key reason that customers are migrating from traditional TTS voices to neural voices. Today we are glad to share that we have upgraded our Neural TTS voices with a new-generation vocoder, called &lt;EM&gt;HiFiNet&lt;/EM&gt;, which delivers much higher audio fidelity while significantly improving synthesis speed. This is particularly beneficial to customers whose scenarios rely on high-fidelity audio or long interactions, including video dubbing, audio books, or online education materials.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What’s new?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our recent updates to Azure Neural TTS voices include a major upgrade of the vocoder. Voice fidelity has improved significantly, and audio quality defects such as glitches and small noises are largely reduced. Our tests show that the new vocoder generates audio with no audible quality loss compared to the recordings in the training data (more details are introduced later). In addition, it can synthesize speech much faster than our previous version of the product. All these benefits are achieved through a new-generation neural vocoder, called &lt;EM&gt;HiFiNet&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What is a vocoder and why does it matter?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The vocoder is a major component in speech synthesis, or text-to-speech. It turns an intermediate representation of the audio, called acoustic features, into an audible waveform. A neural vocoder is a vocoder design that uses deep learning networks and is a critical module of Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft Azure &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;Neural TTS&lt;/A&gt;&amp;nbsp;consists of three major components in the engine: Text Analyzer, Neural Acoustic Model, and Neural Vocoder. To generate natural synthetic speech from text, the text is first fed into the &lt;EM&gt;Text Analyzer&lt;/EM&gt;, which outputs a phoneme sequence. A phoneme is a basic unit of sound that distinguishes one word from another in a particular language. The sequence of phonemes defines the pronunciation of the words in the text. The phoneme sequence then goes into the &lt;EM&gt;Neural Acoustic Model&lt;/EM&gt; to predict acoustic features, which define properties of the speech signal such as speaking style, speed, intonation, and stress patterns. Finally, the &lt;EM&gt;Neural Vocoder&lt;/EM&gt; converts the acoustic features into audible waves, producing the synthetic speech.&lt;/P&gt;
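&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From an application's point of view, this whole pipeline sits behind a single synthesis call. As a minimal sketch (the key, region, voice name, and output file name below are placeholders), here is what such a request looks like with the Azure Speech SDK for Python:&lt;/P&gt;
&lt;PRE&gt;# Minimal Neural TTS sketch with the Azure Speech SDK (pip install azure-cognitiveservices-speech).
# The key, region, voice name, and output file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # any voice from the neural voice list

audio_config = speechsdk.audio.AudioOutputConfig(filename="output.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

result = synthesizer.speak_text_async("Top cinematographers weigh in on filmmaking in the age of streaming.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio written to output.wav")&lt;/PRE&gt;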
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The vocoder is critical to the final audio quality. Specifically, it directly impacts the fidelity of the wave, including its clearness, timbre, and so on. Let’s hear the difference in audio quality with samples generated by different neural vocoders from the same acoustic features (we recommend you &lt;STRONG&gt;listen with a high-quality headset&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Vocoder versions&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;
&lt;P&gt;2018 vocoder for real-time synthesis&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;
&lt;P&gt;2019 vocoder for real-time synthesis&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180px"&gt;
&lt;P&gt;2020 vocoder for real-time synthesis (HiFiNet)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;&lt;EM&gt;“&lt;/EM&gt;&lt;EM&gt;Top cinematographers weigh in on filmmaking in the age of streaming.”&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2018-vocoder.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="168px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2019-vocoder-new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="180px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2020-vocoder-new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With each vocoder update, the generated speech sounds clearer, less muffled, and less noisy. In the next section, we introduce how a &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder is trained during the creation of a neural voice model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;How does HiFiNet work?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the Azure TTS system, neural voice models are trained on human voice recordings using deep learning networks. As part of the training, a vocoder is built with the goal of generating high-quality audio output close to the original recordings in the training data. At the same time, it needs to run fast enough to produce at least 24,000 samples per second, i.e. a sampling rate of 24khz, which is the default sampling rate of Azure Neural TTS voice models.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Leveraging state-of-the-art research on vocoders, we designed the training pipeline for &lt;EM&gt;HiFiNet&lt;/EM&gt;, the new-generation Neural TTS vocoder, and applied it to create neural voice models in Azure Neural TTS. This pipeline is built with one simple goal: to produce machine-generated audio waves (synthetic speech) that are indistinguishable from the original waves (human recordings), at high speed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The chart below describes how the &lt;EM&gt;HiFiNet&lt;/EM&gt; training pipeline works. With this pipeline, two key networks are trained: a &lt;EM&gt;Generator&lt;/EM&gt;, which is used to create audio (‘Generated Wave’), and a &lt;EM&gt;Discriminator&lt;/EM&gt;, which is used to identify the gap between the created audio and its training data (‘Real Wave’). The goal of the training is to make the &lt;EM&gt;Generator&lt;/EM&gt; generate waves that the &lt;EM&gt;Discriminator&lt;/EM&gt; can’t distinguish from the original real recordings.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Vocoder-Training.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231194iEF2762BE612324B0/image-size/large?v=v2&amp;amp;px=999" role="button" title="Vocoder-Training.png" alt="Training pipeline of the HiFiNet Vocoder" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Training pipeline of the HiFiNet Vocoder&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;First, the training pipeline uses the original human recording as input and extract the acoustic features. Then, the acoustic features are fed into the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; module which generates waves, so we get two sets of waves: the original recordings as real waves, and the generated waves as fake waves. Next, the two sets of waves are fed into the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; network to distinguish which are the real waves and which are the generated fake waves. This output from the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; is used as feedback to help the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; and &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; to learn better. As this training loop continues, the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; becomes smarter to create indistinguishable fake waves, while the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; gets smarter in making the right judgements. Finally, when the training reaches a point where &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; can’t distinguish the waves generated by the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator &lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt;from real waves, the vocoder is successfully trained. This vocoder is capable of producing audio outputs without noticeable quality loss compared to the original human recordings.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the next section we describe the performance of &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What are the benefits?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet significantly improves audio quality.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To understand the benefit of &lt;EM&gt;HiFiNet&lt;/EM&gt;, we conducted a number of tests across many aspects, which yielded positive results. Our tests show that the &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder significantly improves the audio quality of the Neural TTS voice output compared to our previous version of the product.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;CMOS (Comparative Mean Opinion Score) is a well-accepted method in the speech industry for comparing the voice quality of two TTS systems. A CMOS test is similar to an A/B test: participants listen to pairs of audio samples generated by the two systems and give their subjective opinions on how A compares to B. Normally in one test, we recruit 30-60 anonymous testers with qualified language expertise to evaluate around 50 pairs of audio samples side by side. The result is reported as the &lt;EM&gt;CMOS gap&lt;/EM&gt;, which measures the average difference in opinion score between the two systems. When the absolute value of a CMOS gap is &amp;lt;0.1, we consider systems A and B on par. When the absolute value of a CMOS gap is &amp;gt;=0.1, one system is reported better than the other. If the absolute value of a CMOS gap is &amp;gt;=0.2, we say one system is significantly better than the other.&lt;/P&gt;
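&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a small illustrative sketch (with made-up listener scores, not real test data), this is how a CMOS gap can be computed from paired ratings and interpreted against the thresholds above:&lt;/P&gt;
&lt;PRE&gt;# Illustrative CMOS-gap computation with made-up listener scores.
# Each pair is (score for system A, score for system B) from one listener on one sample pair.
paired_scores = [(4.5, 4.0), (4.0, 4.0), (4.8, 4.3), (4.2, 4.5), (4.6, 4.1)]

# CMOS gap = average difference in opinion score between the two systems (A minus B).
gap = sum(a - b for a, b in paired_scores) / len(paired_scores)

if abs(gap) &amp;lt; 0.1:
    verdict = "on par"
elif abs(gap) &amp;lt; 0.2:
    verdict = "one system is better"
else:
    verdict = "one system is significantly better"

print(f"CMOS gap: {gap:+.3f} ({verdict})")&lt;/PRE&gt;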
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have done hundreds of CMOS tests of &lt;EM&gt;HiFiNet&lt;/EM&gt; compared to our last version vocoder, on 68 neural voices across 49 languages/locales. Our results show that &lt;EM&gt;HiFiNet&lt;/EM&gt; is notably better than the previous production vocoder in Azure Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In general, the audio quality, especially the fidelity, is clearly improved. On average, across all languages, the &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder achieves a CMOS gain higher than 0.2 compared to the previous vocoder, which means the improvement is audible to users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In particular, &lt;EM&gt;HiFiNet&lt;/EM&gt; is also more robust than the previous vocoder. Audio defects are largely reduced in the waves generated with &lt;EM&gt;HiFiNet&lt;/EM&gt;. Our tests show that with the previous production vocoder, in 100 test samples, our testers could hear about 10 defects such as beeps, clicks, or fidelity loss. Although most of these are not obvious, they can still be annoying if they keep occurring in long audio or multi-round voice interactions. These defects are no longer reported with the &lt;EM&gt;HiFiNet&lt;/EM&gt; audio, under the same test procedure with the same test sets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these advantages, we have updated the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;Neural TTS voices&lt;/A&gt; on Azure Cognitive Services with the new vocoder. Listen to the samples below to hear the difference. &amp;nbsp;Or test the new voices using your own text with our &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;online demo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;
&lt;P&gt;Previous vocoder&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="138"&gt;
&lt;P&gt;HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174" scope="col" style="width: 200px;"&gt;
&lt;P&gt;HiFiNet CMOS gain&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder24k-Cheerful-00018.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN24k-Cheerful-00018.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.122 (Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;German&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00022-oldVocoder24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00022-HiFiNet24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.193 (Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Chinese (Mandarin, Simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder24k-News-00005.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN24k_News-00005.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.348 (Obviously Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Japanese&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder-LongSentence-00032.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN-LongSentence-00032.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.465 (Obviously Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet reaches human-parity audio fidelity.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, we have conducted tests to compare the human recording audio quality and the computer-generated audio quality with &lt;EM&gt;HiFiNet&lt;/EM&gt;. To make the comparison more accurate and more focused on the vocoder itself, we use the acoustic features extracted directly from human recordings instead of the TTS-predicted acoustic features, so the acoustic differences are controlled and only the vocoder is evaluated in the CMOS tests. Participants are asked to give their scores for different pairs of the generated waves and human recordings. Our result shows the CMOS gap of the audio produced by &lt;EM&gt;HiFiNet&lt;/EM&gt; compared to human recordings is -0.05, which means the difference is hardly audible and the audio quality is on par.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear how close the &lt;EM&gt;HiFiNet&lt;/EM&gt; audio fidelity is to the human recordings with the samples below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;
&lt;P&gt;Human recording&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="138"&gt;
&lt;P&gt;HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174" scope="col" style="width: 200px;"&gt;
&lt;P&gt;HiFiNet CMOS gap&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.recording-GeneralSentence-0000000365.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiNet-GeneralSentence-0000000365.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.045 (on par)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Chinese (Mandarin, Simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.recording-GeneralSentence-0001000011.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiNet-GeneralSentence-0001000011.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;-0.054 (on par)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet generates audio faster.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Real Time Factor (RTF) is used to measure the performance of a vocoder. It is calculated as the time needed to generate the audio divided by the duration of the audio.&lt;/P&gt;
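&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a tiny worked example (with placeholder numbers matching the GPU figure quoted below):&lt;/P&gt;
&lt;PRE&gt;# RTF = time spent generating the audio / duration of the audio generated.
synthesis_seconds = 0.1   # placeholder: time spent synthesizing
audio_seconds = 10.0      # placeholder: length of the generated audio
rtf = synthesis_seconds / audio_seconds
print(rtf)  # 0.01 -- i.e. 10 seconds of audio generated in 0.1 seconds&lt;/PRE&gt;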
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;HiFiNet&lt;/EM&gt; is a parallel vocoder so it can generate multiple samples at the same time. Here are some measurements of &lt;EM&gt;HiFiNet&lt;/EM&gt; performance on both GPU and CPU devices. &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With output at a 24khz sampling rate, on an M60 GPU, through a carefully optimized &lt;A href="https://developer.nvidia.com/cuda-zone" target="_blank" rel="noopener"&gt;CUDA&lt;/A&gt; implementation, the vocoder RTF is around 0.01, which means the &lt;EM&gt;HiFiNet&lt;/EM&gt; system can generate 10 seconds of audio in 0.1 seconds. This is almost 3x the speed of our previous production vocoder.&lt;/P&gt;
&lt;P&gt;On CPU machines, thanks to the highly-optimized &lt;A href="https://onnx.ai/" target="_blank" rel="noopener"&gt;ONNX&lt;/A&gt; runtime, the vocoder RTF is around 0.02 for 24khz sampling rate output.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the performance improvement of &lt;EM&gt;HiFiNet&lt;/EM&gt;, the end-to-end synthesis speed is about 2X as fast as our previous Neural TTS engine, while the audio quality is significantly improved at the same time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What to expect next&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Currently we support up to 24khz sampling rate on Azure Neural TTS service with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;68 neural voice models&lt;/A&gt; available. In some highly sophisticated scenarios like audio dubbing, higher fidelity output like 48khz sampling rate makes a world of difference. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The snippet below from an audio spectrum shows the difference between 48khz and 24khz sampling rates. Audio with a 48khz sampling rate has a wider frequency response range, which keeps more of the sophisticated details and nuances of the sound. Such a high sampling rate creates challenges for both voice quality and inference speed.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="48khz frequency range.png" style="width: 992px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231195i2148010E6A3DC434/image-size/large?v=v2&amp;amp;px=999" role="button" title="48khz frequency range.png" alt="24khz vs. 48khz: different frequency range" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;24khz vs. 48khz: different frequency range&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In our exploration, &lt;EM&gt;HiFiNet&lt;/EM&gt; handles both challenges well.&amp;nbsp; According to our experiments, the &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder at a 48khz sampling rate can be trained to achieve even higher quality with reasonable inference speed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the difference in audio fidelity between the TTS output at 24khz and 48khz sampling rates, &lt;STRONG&gt;with a hi-fi speaker or headset&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="245px" height="30px"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="152px" height="30px"&gt;
&lt;P&gt;24khz HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="30px"&gt;
&lt;P&gt;48khz HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD colspan="2"&gt;
&lt;P&gt;English (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.hifinet-LongSentence-00001.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.hifinet_48k-LongSentence-00001.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="245px" height="57px"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="152px" height="57px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00013-hifinet24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="148px" height="57px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00013-hifinet48k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The 48khz vocoder is now in private preview and can be applied to custom voices. &amp;nbsp;Contact mstts [at] microsoft.com for details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create a custom voice with HiFiNet&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The HiFiNet vocoder is also available in the&amp;nbsp;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt;&amp;nbsp;capability, enabling organizations to create a unique brand voice in multiple languages for their unique scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;A href="https://aka.ms/customneural" target="_blank" rel="noopener"&gt;Learn more about the process for getting started with Custom Neural Voice&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get started&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering more natural and intuitive voice experiences for global customers. Text to Speech has more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;70 standard voices in over 40 languages&lt;/A&gt;&amp;nbsp;and locales in addition to our growing list of&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;Neural TTS voices&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our &lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Nov 2020 16:41:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-11-19T16:41:18Z</dc:date>
    </item>
    <item>
      <title>Re: Computer Vision for spatial analysis at the Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/1824927#M102</link>
      <description>&lt;P&gt;Is Tesla T4 GPU a recommendation or a mandatory requirement? Can we use Nvidia Jetson Series (TX2, Xavier NX etc.) as edge device for a single low frame rate video stream?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 27 Oct 2020 21:30:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/bc-p/1824927#M102</guid>
      <dc:creator>hussnain_ahmed</dc:creator>
      <dc:date>2020-10-27T21:30:22Z</dc:date>
    </item>
    <item>
      <title>Re: Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1797871#M100</link>
      <description>&lt;P&gt;&lt;LI-USER uid="832893"&gt;&lt;/LI-USER&gt;, if you are using the REST endpoint, please ensure your request URL uses the most recent (v3.1) version of the API. For example: https://{endpoint}/vision/&lt;STRONG&gt;v3.1&lt;/STRONG&gt;/describe[?maxCandidates][&amp;amp;language]. If you are using the client library, please update to the latest version of the client library as well.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Oct 2020 02:09:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1797871#M100</guid>
      <dc:creator>boxinli</dc:creator>
      <dc:date>2020-10-20T02:09:36Z</dc:date>
    </item>
    <item>
      <title>Re: Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1791073#M99</link>
      <description>&lt;P&gt;These are really amazing! I hope the marketing team make these new features and technologies known to people, because they deserve more attention.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Oct 2020 21:44:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1791073#M99</guid>
      <dc:creator>HotCakeX</dc:creator>
      <dc:date>2020-10-16T21:44:59Z</dc:date>
    </item>
    <item>
      <title>Re: Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1781873#M98</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Just wanted to clarify, are these new captioning abilities built into the existing APIs/REST Endpoints? (ie if we are already pulling out the captioning text, we should see the improvements?)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;cheers&lt;/P&gt;</description>
      <pubDate>Thu, 15 Oct 2020 00:47:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/bc-p/1781873#M98</guid>
      <dc:creator>WahYuen</dc:creator>
      <dc:date>2020-10-15T00:47:13Z</dc:date>
    </item>
    <item>
      <title>Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/ba-p/1667146</link>
      <description>&lt;P&gt;&lt;SPAN&gt;How would you&amp;nbsp;leverage&amp;nbsp;technology&amp;nbsp;capable of&amp;nbsp;generating&amp;nbsp;natural language&amp;nbsp;image&amp;nbsp;descriptions&amp;nbsp;that are, in many cases,&amp;nbsp;just as good or better than what a human could&amp;nbsp;produce? What if&amp;nbsp;that&amp;nbsp;capability&amp;nbsp;is just one cloud&amp;nbsp;API&amp;nbsp;call away? Would you create live scene captions for people who are blind or low vision to better understand the world around them, like&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/ai/seeing-ai" target="_blank" rel="noopener"&gt;Seeing AI&lt;/A&gt;&lt;SPAN&gt;?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/" target="_blank" rel="noopener"&gt;Azure Cognitive Services&lt;/A&gt;, you can now take advantage of state-of-the-art image captioning that has achieved human parity on captioning benchmarks thanks to advancements in the underlying AI model. Below are some examples showing how the improved model is more accurate than the old one:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/ubpEUksa3v0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="border-style: hidden; width: 100%;" border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="35.22099447513812%" height="314px" style="border-style: hidden; width: 35.22099447513812%; height: 314px;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="press2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/226568iB09A69BDBDFDB6C2/image-size/large?v=v2&amp;amp;px=999" role="button" title="press2.png" alt="press2.png" /&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;TD width="64.77900552486187%" height="314px" style="border-style: hidden; width: 64.77900552486187%;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="press8.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/226569i99C2AD0C3FFE4385/image-size/large?v=v2&amp;amp;px=999" role="button" title="press8.png" alt="press8.png" /&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="35.22099447513812%" height="83px" style="border-style: hidden; width: 35.22099447513812%;"&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;Improved model: A trolley on a city street&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;Old model: A view of a city street&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="64.77900552486187%" height="83px" style="border-style: hidden; width: 64.77900552486187%;"&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;Improved model: A person using a microscope&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;Old model: A person sitting at a table using a laptop&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, let us take a closer look at the technology and how to easily harness its power for your users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Behind the Scenes of the Technology&lt;/H2&gt;
&lt;P&gt;The novel object captioning at scale (&lt;A href="https://nocaps.org/" target="_blank" rel="noopener"&gt;nocaps&lt;/A&gt;) challenge evaluates AI models on their ability to generate image captions describing new objects that are not present in their training data. Microsoft’s Azure AI team pioneered the Visual Vocabulary (VIVO) pre-training technique that led to the industry first of &lt;A href="https://evalai.cloudcv.org/web/challenges/challenge-page/355/leaderboard/1011" target="_blank" rel="noopener"&gt;surpassing human performance on the nocaps benchmark&lt;/A&gt;. Before we learn more about this innovation, we should first understand Vision and Language Pre-training (VLP). It is a cross-modality (across vision and language) learning technique that uses large-scale image/sentence data pairs to train machine learning models capable of generating natural language captions for images. However, because visual concepts are learned from image/sentence pairs, which are costly to obtain, it is difficult to train a broadly useful model with wide visual concept coverage. This is where VIVO pre-training comes in. It improves and extends VLP by allowing rich visual concepts to be learned from easier-to-obtain image/word pairs (instead of sentences) to build a large-scale visual vocabulary. While natural language sentence generation is still trained with limited visual concepts, the resulting image caption is enriched with new objects from this large-scale visual vocabulary.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture1.png" style="width: 755px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222986iEFEE7A610A3321CC/image-dimensions/755x437?v=v2" width="755" height="437" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;Figure 1: VIVO pre-training uses paired image-tag data to learn a rich visual vocabulary where image region features and tags of the same object are aligned. Fine-tuning is conducted on paired image-sentence data that only cover a limited number of objects (in blue). During inference, our model can generalize to describe novel objects (in yellow) that are learnt during VIVO pre-training.&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please see this &lt;A href="https://aka.ms/MSRBlogImageCap" target="_blank" rel="noopener"&gt;MSR blog post&lt;/A&gt; to learn more about VIVO pre-training.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Try the Service in Your App&lt;/H2&gt;
&lt;P&gt;Imagine you would like to generate alternative text descriptions for images your users upload to your app. The Azure Computer Vision service, with its much-improved “describe image” (image captioning) capability, can help. Let us take it for a spin.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We will use the Python client library to invoke the service in this blog post. Try these links if you prefer a &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;different language&lt;/A&gt; or want to call the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/curl-analyze" target="_blank" rel="noopener"&gt;REST API&lt;/A&gt; directly.&lt;/P&gt;
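&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you would rather call the REST endpoint directly from Python, here is a minimal sketch using the requests package against the v3.1 describe operation. The resource endpoint, subscription key, and image URL below are placeholders to replace with your own values.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: call the Computer Vision v3.1 "describe" REST operation
# directly. The endpoint, key, and image URL below are placeholders.
import requests

endpoint = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
subscription_key = "YOUR-SUBSCRIPTION-KEY"

describe_url = endpoint + "/vision/v3.1/describe"
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"maxCandidates": 1, "language": "en"}
body = {"url": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg"}

response = requests.post(describe_url, headers=headers, params=params, json=body)
response.raise_for_status()

# The response contains a "description" object with one or more captions.
for caption in response.json()["description"]["captions"]:
    print(caption["text"], caption["confidence"])
&lt;/LI-CODE&gt;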
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Prerequisites&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.python.org/downloads/" target="_blank" rel="noopener"&gt;Python&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;An Azure subscription -&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;create one for free&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Once you have your Azure subscription, &lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" target="_self"&gt;create a Computer Vision resource&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Subscription: Pick the subscription you would like to use. If you just created a new Azure subscription, it should be an option in the dropdown menu.&lt;/LI&gt;
&lt;LI&gt;Resource group: Pick an existing one or create a new one.&lt;/LI&gt;
&lt;LI&gt;Region: Pick the region you would like your resource to be in.&lt;/LI&gt;
&lt;LI&gt;Name: Give your resource a unique name.&lt;/LI&gt;
&lt;LI&gt;Pricing tier: You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.&lt;/LI&gt;
&lt;LI&gt;Then click “&lt;STRONG&gt;Review + create&lt;/STRONG&gt;” to review your choices, and click “&lt;STRONG&gt;Create&lt;/STRONG&gt;” to deploy the resource.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture2.png" style="width: 665px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222988i7E0ACD0524F8E985/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once your resource is deployed, click&amp;nbsp;“&lt;STRONG&gt;Go to resource&lt;/STRONG&gt;.”&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 385px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222989i654978B497385668/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click “&lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt;” to get your subscription key and endpoint. You will need these for the code sample below; see the note after the screenshot on keeping them out of your source code.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture4.png" style="width: 703px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222990i9FEF3DBBCCC6B2A7/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture4.png" alt="Picture4.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Install the client&lt;/H3&gt;
&lt;P&gt;You can install the client library with:&lt;/P&gt;
&lt;PRE&gt;pip install --upgrade azure-cognitiveservices-vision-computervision&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create and run the sample&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Copy the following code into a text editor.&lt;/LI&gt;
&lt;LI&gt;Optionally, replace the value of remote_image_url with the URL of a different image for which to generate a caption.&lt;/LI&gt;
&lt;LI&gt;Also optionally, set useRemoteImage to False and set local_image_path to the path of a local image for which to generate a caption.&lt;/LI&gt;
&lt;LI&gt;Save the code as a file with a .py extension. For example, describe-image.py.&lt;/LI&gt;
&lt;LI&gt;Open a command prompt window.&lt;/LI&gt;
&lt;LI&gt;At the prompt, use the python command to run the sample. For example, python describe-image.py.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import sys

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Best practice is to read this key from secure storage, 
# for this example we'll embed it in the code.
subscription_key = "&amp;lt;your subscription key here&amp;gt;"
endpoint = "&amp;lt;your endpoint here&amp;gt;"

# Create the computer vision client
computervision_client = ComputerVisionClient(
    endpoint, CognitiveServicesCredentials(subscription_key))

# Set to False if you want to use local image instead
useRemoteImage = True

if (useRemoteImage):
    # Get caption for a remote image, change to your own image URL as appropriate
    remote_image_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg"
    description_results = computervision_client.describe_image(
        remote_image_url)
else:
    # Get caption for a local image, change to your own local image path as appropriate
    local_image_path = "&amp;lt;replace with local image path&amp;gt;"
    with open(local_image_path, "rb") as image:
        description_results = computervision_client.describe_image_in_stream(
            image)

# Get the first caption (description) from the response
if (len(description_results.captions) == 0):
    image_caption = "No description detected."
else:
    image_caption = description_results.captions[0].text

print("Description of image:", image_caption)
&lt;/LI-CODE&gt;
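&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The describe_image call also accepts optional max_candidates and language arguments (mirroring the REST API’s maxCandidates and language parameters), and each returned caption carries a confidence score. A small variation of the call above, assuming the same client and image URL:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Request up to three candidate captions in English and print each one with
# its confidence score (assumes computervision_client and remote_image_url
# from the sample above).
description_results = computervision_client.describe_image(
    remote_image_url, max_candidates=3, language="en")

for caption in description_results.captions:
    print("'{}' (confidence: {:.2f})".format(caption.text, caption.confidence))
&lt;/LI-CODE&gt;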
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Related&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://aka.ms/AA99bjt" target="_self"&gt;What’s that? Microsoft AI system describes images as well as people do&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Learn more about other &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision" target="_blank" rel="noopener"&gt;Computer Vision&lt;/A&gt; capabilities.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Oct 2020 00:21:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/ba-p/1667146</guid>
      <dc:creator>boxinli</dc:creator>
      <dc:date>2020-10-15T00:21:12Z</dc:date>
    </item>
  </channel>
</rss>

