<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure AI articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-ai/bg-p/AzureAIBlog</link>
    <description>Azure AI articles</description>
    <pubDate>Fri, 23 Apr 2021 17:25:06 GMT</pubDate>
    <dc:creator>AzureAIBlog</dc:creator>
    <dc:date>2021-04-23T17:25:06Z</dc:date>
    <item>
      <title>Localize your website with Microsoft Translator</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003</link>
      <description>&lt;H1&gt;Web Localization and Ecommerce&lt;/H1&gt;
&lt;P&gt;Using the Microsoft Azure Translator service, you can localize your website in a cost-effective way. With the advent of the internet, the world has become a much smaller place. Vast amounts of information are stored and transmitted between cultures and countries, giving us all the ability to learn and grow from each other. Powered by advanced deep learning, Microsoft Azure Translator delivers fast, high-quality neural machine translation, empowering you to break through language barriers and take advantage of all these powerful vehicles of knowledge and data transfer.&lt;/P&gt;
&lt;P&gt;Research shows that 40% of internet users will never buy from websites in a foreign language[1]. Machine translation from Azure, supporting over &lt;A href="https://www.microsoft.com/en-us/translator/business/languages/" target="_blank" rel="noopener"&gt;90 languages and dialects&lt;/A&gt;, helps you go to market faster and reach buyers in their native languages by localizing your web assets: from your marketing pages to user-generated content, and everything in-between.&lt;/P&gt;
&lt;P&gt;Up to 95% of the online content that companies generate is available in only one language. This is because localizing websites, especially beyond the home page, is cost prohibitive outside of the top few markets. As a result, localized content seldom extends one or two clicks beyond a home page. However, with machine translation from Azure Translator Service, content that wouldn’t otherwise be localized can be, and now most of your content can reach customers and partners worldwide.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;How to localize your website in a cost-effective way?&lt;/H1&gt;
&lt;P&gt;The first step is to understand the nature of your website content and classify it. This is critical, as each type of content needs a different level of localization. There are four types of content: a) static or dynamic, b) generated by you or posted by customers, c) sensitive content like ‘Terms of Use’, d) text that is part of UX elements.&lt;/P&gt;
&lt;P&gt;Static content such as information about the organization, product or service descriptions, user guides, terms of use, etc. can be translated offline once (or infrequently) into all required target languages. The translation results can be cached and served from your web server, which substantially reduces the cost of translation. The machine translation models that power the Azure Translator service are regularly updated to improve quality, so consider refreshing the translations once a quarter, if not every month.&lt;/P&gt;
&lt;P&gt;User-generated content such as customer reviews and information requests is dynamic in nature; not all of it requires translation, so translate it on demand only. You can plan for a UX element in the webpage that initiates translation when needed, and identify the target language from the user’s browser language. Likewise, responses to customers can be translated back into the language of the original request or comment.&lt;/P&gt;
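&lt;P&gt;As a rough illustration of that idea, the sketch below picks a translation target from the browser’s Accept-Language header. The parsing is simplified and the supported-language set is only an example, not the full set of languages Translator supports.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: choose a translation target from the browser's Accept-Language header.
# The supported-language set below is illustrative only.
SUPPORTED_LANGUAGES = {'de', 'fr', 'es', 'ja', 'pt'}

def pick_target_language(accept_language_header, default='en'):
    # Example header: "de-DE,de;q=0.9,en;q=0.8" -- languages appear in preference order
    for part in accept_language_header.split(','):
        lang = part.split(';')[0].strip().split('-')[0].lower()
        if lang in SUPPORTED_LANGUAGES:
            return lang
    return default

print(pick_target_language('de-DE,de;q=0.9,en;q=0.8'))  # prints "de"&lt;/LI-CODE&gt;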
&lt;P&gt;For sensitive content such as terms of use and company policies, a human review after machine translation is recommended.&lt;/P&gt;
&lt;P&gt;Text in UX elements of the webpage, such as menus and form labels, is typically only one or two words long and has restricted space. It is therefore recommended to do UX testing after translation for fit and finish; if needed, look for an alternate translation or a human review.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Localization.png" style="width: 687px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274754iD39DDD8C6164BB4E/image-dimensions/687x374?v=v2" width="687" height="374" role="button" title="Localization.png" alt="Localization.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Because the Azure Translator service is fast and cost-effective, you can easily test which localization option is optimal for your business and your users. For example, even on a limited budget you can machine-localize into dozens of languages and measure customer traffic in multiple markets in parallel. Using your existing web analytics, you can then decide where to invest in human translation in terms of markets, languages, or pages. For example, if machine-translated content passes a defined page-view threshold, your system may trigger a human review of that content, while machine translation continues to cover other areas to preserve reach.&lt;/P&gt;
&lt;P&gt;By combining pure machine translation and paid translation resources, you can select different quality levels for the translations based on your business needs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;How to use Azure Translator service to translate static content&lt;/H1&gt;
&lt;P&gt;Prerequisites:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an &lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;Azure subscription&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Once you have an Azure subscription,&amp;nbsp;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation" target="_blank" rel="noopener"&gt;create a Translator resource&lt;/A&gt;&amp;nbsp;in the Azure portal.&lt;/LI&gt;
&lt;LI&gt;Once the Translator resource is created, go to the resource and select&amp;nbsp;‘Keys and Endpoint’, which is used to connect your application to the Translator service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_2-1619114278286.png" style="width: 394px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274756iF3BCD1903DCE12C5/image-dimensions/394x503?v=v2" width="394" height="503" role="button" title="Krishna_Doss_2-1619114278286.png" alt="Krishna_Doss_2-1619114278286.png" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_3-1619114278302.png" style="width: 495px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274755iDEB45D2A41247B6B/image-dimensions/495x300?v=v2" width="495" height="300" role="button" title="Krishna_Doss_3-1619114278302.png" alt="Krishna_Doss_3-1619114278302.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;U&gt;Translating static webpage content&lt;/U&gt;:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The code sample below shows how to translate an element of a webpage. You can iterate over each element of your webpage that requires translation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import os, requests, uuid, json
subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com"
path = '/translate'
constructed_url = endpoint + path

params = {
    'api-version': '3.0',
    'to': ['de'], # target language
    'textType': 'html' 
}

headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-type': 'application/json',
    'X-ClientTraceId': str(uuid.uuid4())
}

# You can pass more than one object in body.
body = [{
    "text": "&amp;lt;p&amp;gt;The samples on this page use hard-coded keys and endpoints for simplicity. \
    Remember to &amp;lt;strong&amp;gt;remove the key from your code when you're done&amp;lt;/strong&amp;gt;, and \
    &amp;lt;strong&amp;gt;never post it publicly&amp;lt;/strong&amp;gt;. For production, consider using a secure way of \
    storing and accessing your credentials. See the Cognitive Services security article \
    for more information.&amp;lt;/p&amp;gt;"
}]

response = requests.post(constructed_url, params=params, headers=headers, json=body)
result = response.json()
print(result[0]['translations'][0]['text'])  # access the translated text in the response&lt;/LI-CODE&gt;
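&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Building on the sample above, the sketch below batches the static strings of a page into a single request and caches the result to a file, in line with the earlier advice to translate static content once and serve it from your web server. The page_elements dictionary and the cache file name are illustrative placeholders.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import json, requests, uuid

# Illustrative static strings from a page; in practice, collect them from your templates.
page_elements = {
    'headline': 'Welcome to our store',
    'cta': 'Add to cart',
    'footer': 'All prices include VAT'
}

def translate_static_elements(elements, target_language, subscription_key,
                              endpoint='https://api.cognitive.microsofttranslator.com'):
    params = {'api-version': '3.0', 'to': [target_language], 'textType': 'html'}
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-type': 'application/json',
        'X-ClientTraceId': str(uuid.uuid4())
    }
    # A single request can carry several text objects, so send all elements together.
    body = [{'text': text} for text in elements.values()]
    result = requests.post(endpoint + '/translate', params=params, headers=headers, json=body).json()
    # Results come back in request order, so zip them with the element keys.
    return {key: item['translations'][0]['text'] for key, item in zip(elements.keys(), result)}

# Cache the translations so the localized page can be served without calling the API per request.
translated = translate_static_elements(page_elements, 'de', 'YOUR_SUBSCRIPTION_KEY')
with open('translations_de.json', 'w', encoding='utf-8') as f:
    json.dump(translated, f, ensure_ascii=False)&lt;/LI-CODE&gt;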
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Localization is just a fraction of what you can do with Translator, so don't let the learning stop here. Check out recently released Translator features and the additional doc links below to dive deeper, and join the Translator Ask Microsoft Anything session on 4/27.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;U&gt;Get started&lt;/U&gt;:&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sign up for &lt;A href="https://azure.microsoft.com/en-us/free/cognitive-services/" target="_blank" rel="noopener"&gt;Azure trial&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Join Translator engineering team on &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/4-27-21-translator-within-azure-cognitive-services-ama/m-p/2275137" target="_blank" rel="noopener"&gt;Ask Microsoft Anything on 4/27&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn about &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185" target="_blank" rel="noopener"&gt;Document Translation (Preview)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/create-language-translator-mixed-reality-application-unity-azure-cognitive-services/" target="_blank" rel="noopener"&gt;Create a language translator application with Unity and Azure Cognitive Services&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/overview" target="_blank" rel="noopener"&gt;Translator documentation&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&amp;nbsp; CSA Research – Can’t Read, Won’t Buy – B2C Analyzing Consumer Language Preferences and Behaviors in 29 Countries &lt;A href="https://insights.csa-research.com/reportaction/305013126/Marketing" target="_blank" rel="noopener"&gt;https://insights.csa-research.com/reportaction/305013126/Marketing&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 22 Apr 2021 23:57:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003</guid>
      <dc:creator>Krishna_Doss</dc:creator>
      <dc:date>2021-04-22T23:57:02Z</dc:date>
    </item>
    <item>
      <title>Big data preparation in Azure Machine Learning – powered by Azure Synapse Analytics</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671</link>
      <description>&lt;P&gt;Many customers who embark on a machine learning journey deal with big data, and need the power of distributed data processing engines to prepare their data for ML. By offering Apache Spark® (powered by Azure Synapse Analytics) in Azure Machine Learning (Azure ML), we are empowering customers to work on their end-to-end ML lifecycle including large-scale data preparation, featurization, model training, and deployment within Azure ML workspace without the need to switching between multiple tools for data preparation and model training. &lt;SPAN&gt;The ability to build the full ML lifecycle&lt;/SPAN&gt; within Azure ML will reduce the time required for customers to iterate on a machine learning project which typically includes multiple rounds of data preparation and training.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With the preview of managed Apache Spark in Azure ML, customers can use Azure ML notebooks to connect to Spark pools in Azure Synapse Analytics, to do interactive data preparation using&amp;nbsp;PySpark. Customers have the&amp;nbsp;option to configure&amp;nbsp;Spark sessions to quickly experiment and iterate on the data. Once ready, they can leverage Azure ML pipelines to automate their end-to-end ML workflow from data preparation to model deployment all in one environment, &lt;/SPAN&gt;&lt;SPAN&gt;while maintaining their data and model lineage. Customers who prefer to train in the Spark environment can choose to install relevant libraries such as Spark MLlib, MMLSpark, etc. to complete their training on Spark pools.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Customers in preview will be able to benefit from the following key capabilities:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reuse Spark pools from Azure Synapse workspace in Azure ML &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Customers can leverage existing Spark pools from Azure Synapse Analytics (Azure Synapse) in Azure ML by simply linking their Azure ML and Synapse workspaces via Azure ML Studio, the Python SDK, or an ARM template. Customers just need to follow the widget in the UI or use a few lines of code, as described in the documentation &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-link-synapse-ml-workspaces" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
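&lt;P&gt;As a rough sketch of the SDK route (the subscription, resource group, workspace, and linked service names below are placeholders; the documentation linked above is the authoritative reference), linking the two workspaces from Python looks roughly like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: link an Azure Synapse workspace to an Azure ML workspace with the Python SDK (v1).
from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration

ws = Workspace.from_config()  # Azure ML workspace

# Placeholder identifiers for the Synapse workspace to link
synapse_link_config = SynapseWorkspaceLinkedServiceConfiguration(
    subscription_id=ws.subscription_id,
    resource_group='my-resource-group',
    name='my-synapse-workspace')

linked_service = LinkedService.register(
    workspace=ws,
    name='synapselink1',
    linked_service_config=synapse_link_config)&lt;/LI-CODE&gt;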
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273990iC3748239EEF8CA15/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Once the workspaces are linked, customers can &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#attach-synapse-spark-pool-as-a-compute" target="_blank" rel="noopener"&gt;attach existing Spark pools&lt;/A&gt; into Azure ML workspace and can also register the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-access-data#supported-data-storage-service-types" target="_blank" rel="noopener"&gt;supported linked services (data store sources)&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273991i887D228F5BCC4424/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Perform interactive data preparation via Spark magic from Azure ML notebooks &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Customers can use Azure ML notebooks to start PySpark sessions via Spark magic on attached Spark pools. Customers can register Azure ML datasets to load data from the storage of their choice. For data in Azure Data Lake Storage Gen1 and Gen2, customers can use their own identities to authenticate access to data by leveraging AML datasets. The attached Spark pools can be used as usual in Azure ML experiments, pipelines, and the designer. More information on &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#launch-synapse-spark-pool-for-data-preparation-tasks" target="_blank" rel="noopener"&gt;leveraging Spark magic for data preparation in AML notebooks is available here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273992i5889A7B6E2B1AC1A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Productionize via Azure ML pipelines to orchestrate E2E ML steps including data preparation&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;After completing the interactive data preparation, customers can leverage Azure ML pipelines to automate data preparation on Apache Spark runtime as a step in the overall machine learning workflow. Customers can use the SynapseSparkStep for data preparation and choose either TabularDataset&amp;nbsp;or FileDataset as input. Customers can also set up HDFSOutputDatasetConfig to generate the sparkstep output as a FileDataset, to be consumed by the following AzureML pipeline step. More details on &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-synapsesparkstep#use-the-synapsesparkstep-in-a-pipeline" target="_blank" rel="noopener"&gt;How to use Apache Spark (powered by Azure Synapse) in your machine learning pipeline here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
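&lt;P&gt;A rough sketch of such a pipeline step is shown below; the script name, compute target, dataset, and datastore path are placeholders, and the full parameter list is covered in the SynapseSparkStep documentation linked above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: a data-preparation step that runs on the attached Synapse Spark pool (v1 SDK).
from azureml.core import Dataset
from azureml.data import HDFSOutputDatasetConfig
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import SynapseSparkStep

# Illustrative tabular input registered in the workspace, and an output for downstream steps.
input_data = Dataset.get_by_name(ws, 'raw_sales_data').as_named_input('raw_data')
prepared_data = HDFSOutputDatasetConfig(destination=(ws.get_default_datastore(), 'prepared/'))

prep_step = SynapseSparkStep(
    name='prepare-data',
    file='prep.py',                        # PySpark script inside source_directory
    source_directory='./scripts',
    inputs=[input_data],
    outputs=[prepared_data],
    compute_target='synapse-spark-compute',
    driver_memory='4g', driver_cores=2,
    executor_memory='4g', executor_cores=2,
    num_executors=2)

pipeline = Pipeline(workspace=ws, steps=[prep_step])&lt;/LI-CODE&gt;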
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Get started with big data preparation in Azure ML via Apache Spark powered by Azure Synapse&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Get started by visiting our&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool#launch-synapse-spark-pool-for-data-preparation-tasks" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&amp;nbsp;and let us know your thoughts. We are committed to making the data preparation experience in Azure ML better for you!&lt;/P&gt;
&lt;P&gt;Learn more about the&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;get started with a free trial&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-prep-synapse-spark-pool" target="_blank" rel="noopener"&gt;Learn more about Azure Synapse big data preparation experience in Azure ML&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-synapsesparkstep" target="_blank" rel="noopener"&gt;Learn more about how to use Apache Spark in your machine learning pipelines&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more about &lt;A href="https://spark.apache.org/" target="_blank" rel="noopener"&gt;Apache Spark&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more about &lt;A href="https://azure.microsoft.com/en-us/services/synapse-analytics/" target="_blank" rel="noopener"&gt;Azure Synapse Analytics&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 20 Apr 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671</guid>
      <dc:creator>Xun_Wang</dc:creator>
      <dc:date>2021-04-20T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Analyzing COVID Medical Papers with Azure and Text Analytics for Health</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890</link>
      <description>&lt;H2&gt;Automatic Paper Analysis&lt;/H2&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Automatic scientific paper analysis is a fast-growing area of study, and thanks to recent improvements in NLP techniques it has advanced greatly in recent years. In this post, we will show you how to derive specific insights from COVID papers, such as changes in medical treatment over time, or joint treatment strategies using several medications:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618308829590.png" style="width: 625px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272390i95B16A643F02EDE5/image-dimensions/625x200?v=v2" width="625" height="200" role="button" title="shwars_1-1618308829590.png" alt="shwars_1-1618308829590.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;The main idea of the approach I will describe in this post is to extract as much semi-structured information from the text as possible, and then store it in a NoSQL database for further processing. Storing the information in a database allows us to make very specific queries to answer some of our questions, as well as to provide a visual exploration tool for medical experts to perform structured search and insight generation. The overall architecture of the proposed system is shown below:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ta-diagram.png" style="width: 645px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272392iA187E1F819B9C347/image-dimensions/645x142?v=v2" width="645" height="142" role="button" title="ta-diagram.png" alt="ta-diagram.png" /&gt;&lt;/span&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;We will use different Azure technologies to gain insights into the paper corpus, such as&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics for Health&lt;/A&gt;&lt;SPAN&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cosmos-db/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;CosmosDB&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI&lt;/A&gt;&lt;SPAN&gt;. Now let’s focus on individual parts of this diagram and discuss them in detail.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;BLOCKQUOTE&gt;If you want to experiment with text analytics yourself - you will need an Azure Account. You can always get&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;free trial&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;if you do not have one. And you may also want to check out&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;other AI technologies for developers&lt;/A&gt;&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="covid-scientific-papers-and-cord-dataset"&gt;COVID Scientific Papers and CORD Dataset&lt;/H2&gt;
&lt;P&gt;The idea to apply&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding"&gt;NLP&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;methods to scientific literature seems quite natural. First of all, scientific texts are already well-structured: they contain things like keywords, an abstract, and well-defined terms. Thus, at the very beginning of the COVID pandemic, a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge" target="_blank" rel="noopener"&gt;research challenge was launched on Kaggle&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to analyze scientific papers on the subject. The dataset behind this competition is called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.semanticscholar.org/cord19" target="_blank" rel="noopener"&gt;CORD&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(&lt;A href="https://arxiv.org/pdf/2004.10706.pdf" target="_blank" rel="noopener"&gt;publication&lt;/A&gt;), and it contains a constantly updated corpus of everything that is published on topics related to COVID. Currently, it contains more than 400,000 scientific papers, about half of them with full text.&lt;/P&gt;
&lt;P&gt;This dataset consists of the following parts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Metadata file&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge?select=metadata.csv" target="_blank" rel="noopener"&gt;Metadata.csv&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;contains the most important information for all publications in one place. Each paper in this table has a unique identifier&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cord_uid&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which in fact does not happen to be completely unique, once you actually start working with the dataset). The information includes:
&lt;UL&gt;
&lt;LI&gt;Title of publication&lt;/LI&gt;
&lt;LI&gt;Journal&lt;/LI&gt;
&lt;LI&gt;Authors&lt;/LI&gt;
&lt;LI&gt;Abstract&lt;/LI&gt;
&lt;LI&gt;Date of publication&lt;/LI&gt;
&lt;LI&gt;doi&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Full-text papers&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;document_parses&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directory, which contain structured text in JSON format, greatly simplifying the analysis.&lt;/LI&gt;
&lt;LI&gt;Pre-built&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Document Embeddings&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;that map&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cord_uid&lt;/CODE&gt;s to float vectors that reflect some overall semantics of the paper.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In this post, we will focus on paper abstracts, because they contain the most important information from the paper. However, for full analysis of the dataset, it definitely makes sense to use the same approach on full texts as well.&lt;/P&gt;
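&lt;P&gt;For instance, a minimal sketch of loading the abstracts from Metadata.csv with pandas could look like this (the column names follow the metadata description above; adjust the file path to wherever you downloaded the dataset):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: load paper abstracts from the CORD metadata file for downstream entity extraction.
import pandas as pd

metadata = pd.read_csv('metadata.csv', usecols=['cord_uid', 'title', 'abstract', 'publish_time'])
# Drop papers without an abstract and de-duplicate cord_uid, which is not always unique.
metadata = metadata.dropna(subset=['abstract']).drop_duplicates(subset=['cord_uid'])

print(len(metadata), 'papers with abstracts')
print(metadata.iloc[0]['title'])&lt;/LI-CODE&gt;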
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="what-ai-can-do-with-text"&gt;What AI Can Do with Text?&lt;/H2&gt;
&lt;P&gt;In the recent years, there has been a huge progress in the field of Natural Language Processing, and very powerful neural network language models have been trained. In the area of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding"&gt;NLP&lt;/ABBR&gt;, the following tasks are typically considered:&lt;/P&gt;
&lt;DL&gt;
&lt;DT&gt;Text classification / intent recognition&lt;/DT&gt;
&lt;DD&gt;In this task, we need to classify a piece of text into a number of categories. This is a typical classification task.&lt;/DD&gt;
&lt;DT&gt;Sentiment Analysis&lt;/DT&gt;
&lt;DD&gt;We need to return a number that shows how positive or negative the text is. This is a typical regression task.&lt;/DD&gt;
&lt;DT&gt;Named Entity Recognition (&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;)&lt;/DT&gt;
&lt;DD&gt;In&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;, we need to extract named entities from text, and determine their type. For example, we may be looking for names of medicines, or diagnoses. Another task similar to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;keyword extraction&lt;/STRONG&gt;.&lt;/DD&gt;
&lt;DT&gt;Text summarization&lt;/DT&gt;
&lt;DD&gt;Here we want to be able to produce a short version of the original text, or to select the most important pieces of text.&lt;/DD&gt;
&lt;DT&gt;Question Answering&lt;/DT&gt;
&lt;DD&gt;In this task, we are given a piece of text and a question, and our goal is to find the exact answer to this question from the text.&lt;/DD&gt;
&lt;DT&gt;Open-Domain Question Answering (&lt;ABBR title="Open Domain Question Answering"&gt;ODQA&lt;/ABBR&gt;)&lt;/DT&gt;
&lt;DD&gt;The main difference from the previous task is that we are given a large corpus of text, and we need to find the answer to our question somewhere in the whole corpus.&lt;/DD&gt;
&lt;/DL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;In&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://soshnikov.com/azure/deep-pavlov-answers-covid-questions/" target="_blank" rel="noopener"&gt;one of my previous posts&lt;/A&gt;, I have described how we can use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Open Domain Question Answering"&gt;ODQA&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;approach to automatically find answers to specific COVID questions. However, this approach is not suitable for serious research.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;To make some insights from text,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;seems to be the most prominent technique to use. If we can understand specific entities that are present in text, we could then perform semantically rich search in text that answers specific questions, as well as obtain data on co-occurrence of different entities, figuring out specific scenarios that interest us.&lt;/P&gt;
&lt;P&gt;To train&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;model, as well as any other neural language model, we need a reasonably large dataset that is properly marked up. Finding those datasets is often not an easy task, and producing them for new problem domain often requires initial human effort to mark up the data.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="pre-trained-language-models"&gt;Pre-Trained Language Models&lt;/H2&gt;
&lt;P&gt;Luckily, modern&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)" target="_blank" rel="noopener"&gt;transformer language models&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can be trained in semi-supervised manner using transfer learning. First, the base language model (for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270" target="_blank" rel="noopener"&gt;&lt;ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model"&gt;BERT&lt;/ABBR&gt;&lt;/A&gt;) is trained on a large corpus of text first, and then can be specialized to a specific task such as classification or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;on a smaller dataset.&lt;/P&gt;
&lt;P&gt;This transfer learning process can also contain additional step - further training of generic pre-trained model on a domain-specific dataset. For example, in the area of medical science Microsoft Research has pre-trained a model called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract" target="_blank" rel="noopener"&gt;PubMedBERT&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(&lt;A href="https://arxiv.org/abs/2007.15779" target="_blank" rel="noopener"&gt;publication&lt;/A&gt;), using texts from PubMed repository. This model can then be further adopted to different specific tasks, provided we have some specialized datasets available.&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pubmedbert.png" style="width: 470px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272398i48A36098BF831B6F/image-dimensions/470x352?v=v2" width="470" height="352" role="button" title="pubmedbert.png" alt="pubmedbert.png" /&gt;&lt;/span&gt;&lt;/P&gt;
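&lt;P&gt;For reference, the pre-trained PubMedBERT checkpoint linked above can be loaded with the Hugging Face transformers library roughly as sketched below; this only loads the base model, and fine-tuning it for a task such as NER would still require a labeled dataset, as discussed:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: load the pre-trained PubMedBERT base model from the Hugging Face hub.
from transformers import AutoTokenizer, AutoModel

model_name = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a sample sentence; the hidden states could feed a downstream NER or classification head.
inputs = tokenizer('Hydroxychloroquine was administered at 400 mg daily.', return_tensors='pt')
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)&lt;/LI-CODE&gt;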
&lt;H2 id="text-analytics-cognitive-services"&gt;Text Analytics Cognitive Services&lt;/H2&gt;
&lt;P&gt;However, training a model requires a lot of skills and computational power, in addition to a dataset. Microsoft (as well as some other large cloud vendors) also makes some pre-trained models available through the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Application Programming Interface"&gt;API&lt;/ABBR&gt;. Those services are called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cognitive Services&lt;/A&gt;, and one of those services for working with text is called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/text-analytics/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt;. It can do the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Keyword extraction&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Named Entity Recognition"&gt;NER&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for some common entity types, such as people, organizations, dates/times, etc.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Sentiment analysis&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Language Detection&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Entity Linking&lt;/STRONG&gt;, by automatically adding internet links to some of the most common entities. This also performs&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;disambiguation&lt;/STRONG&gt;; for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;Mars&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can refer to either the planet or a chocolate bar, and the correct link will be used depending on the context.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For example, let’s have a look at one medical paper abstract analyzed by Text Analytics:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_0-1618309756290.png" style="width: 598px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272399iDC0B993F45A291BE/image-dimensions/598x136?v=v2" width="598" height="136" role="button" title="shwars_0-1618309756290.png" alt="shwars_0-1618309756290.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can see, some specific entities (for example, HCQ, which is short for hydroxychloroquine) are not recognized at all, while others are poorly categorized. Luckily, Microsoft provides special version of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Text Analytics for Health&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="text-analytics-for-health"&gt;Text Analytics for Health&lt;/H2&gt;
&lt;P&gt;Text Analytics for Health is a cognitive service that exposes the pre-trained PubMedBERT model with some additional capabilities. Here is the result of extracting entities from the same piece of text using Text Analytics for Health:&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618309813758.png" style="width: 625px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272400iD71522A990F161E2/image-dimensions/625x180?v=v2" width="625" height="180" role="button" title="shwars_1-1618309813758.png" alt="shwars_1-1618309813758.png" /&gt;&lt;/span&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Currently, Text Analytics for Health is available as gated preview, meaning that you need to request access to use it in your specific scenario. This is done according to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/ai/responsible-ai?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Ethical AI&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;principles, to avoid irresponsible usage of this service for cases where human health depends on the result of this service. You can request access&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/csgate" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;To perform the analysis, we can use a recent version of the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/textanalytics/azure-ai-textanalytics/README.md" target="_blank" rel="noopener"&gt;Text Analytics Python SDK&lt;/A&gt;, which we need to pip-install first:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install azure.ai.textanalytics==5.1.0b5&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;We need to specify the SDK version, because otherwise the current non-beta version would be installed, which lacks the Text Analytics for Health functionality.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The service can analyze a batch of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;text documents&lt;/STRONG&gt;, up to 10 at a time. You can pass either a list of documents or a dictionary. Provided we have the text of an abstract in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;txt&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;variable, we can use the following code to analyze it:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;poller = text_analytics_client.begin_analyze_healthcare_entities([txt])
res = list(poller.result())
print(res)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This results in the following object:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-txt"&gt;[AnalyzeHealthcareEntitiesResultItem(
  id=0, entities=[
     HealthcareEntity(text=2019, category=Time, subcategory=None, length=4, offset=20, confidence_score=0.85, data_sources=None, 
        related_entities={HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}): 'TimeOfCondition'}), 
     HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}), 
     HealthcareEntity(text=COVID-19, category=Diagnosis, subcategory=None, length=8, offset=55, confidence_score=1.0, 
        data_sources=[HealthcareEntityDataSource(entity_id=C5203670, name=UMLS), HealthcareEntityDataSource(entity_id=U07.1, name=ICD10CM), HealthcareEntityDataSource(entity_id=10084268, name=MDR), ...
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;As you can see, in addition to just the list of entities, we also get the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Entity Mapping&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of entities to standard medical ontologies, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.nlm.nih.gov/research/umls/index.html" target="_blank" rel="noopener"&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Relations&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;between entities inside the text, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;TimeOfCondition&lt;/CODE&gt;, etc.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Negation&lt;/STRONG&gt;, which indicates that an entity was used in a negative context, for example&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;COVID-19 diagnosis did not occur&lt;/EM&gt;.&lt;/LI&gt;
&lt;/UL&gt;
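&lt;P&gt;Since we will later store this metadata in a database, it is convenient to flatten the SDK objects into plain dictionaries. Below is a minimal sketch that uses only the fields shown in the result dump above (entity text, category, offset, confidence score, and UMLS links):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: flatten Text Analytics for Health results into JSON-friendly dictionaries.
def entity_to_dict(entity):
    links = [{'dataSource': ds.name, 'id': ds.entity_id} for ds in (entity.data_sources or [])]
    return {
        'text': entity.text,
        'category': str(entity.category),
        'offset': entity.offset,
        'confidenceScore': entity.confidence_score,
        'links': links,
    }

papers_json = []
for doc in res:  # res is the list obtained from poller.result() above
    papers_json.append({
        'id': doc.id,
        'entities': [entity_to_dict(e) for e in doc.entities]
    })&lt;/LI-CODE&gt;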
&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_2-1618309813783.png" style="width: 565px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272401iCED22C0254BFADF5/image-dimensions/565x195?v=v2" width="565" height="195" role="button" title="shwars_2-1618309813783.png" alt="shwars_2-1618309813783.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to using Python SDK, you can also call Text Analytics using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Application Programming Interface"&gt;API&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;directly. This is useful if you are using a programming language that does not have a corresponding SDK, or if you prefer to receive Text Analytics result in the JSON format for further storage or processing. In Python, this can be easily done using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;requests&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;library:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;uri = f"{endpoint}/text/analytics/v3.1-preview.3/entities/
         health/jobs?model-version=v3.1-preview.4"
headers = { "Ocp-Apim-Subscription-Key" : key }
resp = requests.post(uri,headers=headers,data=doc)
res = resp.json()
if res['status'] == 'succeeded':
    result = t['results']
else:
    result = None&lt;/LI-CODE&gt;
&lt;P&gt;&lt;EM&gt;(We need to make sure to use the preview endpoint to have access to text analytics for health)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Resulting JSON file will look like this:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{"id": "jk62qn0z",
 "entities": [
    {"offset": 24, "length": 28, "text": "coronavirus disease pandemic", 
     "category": "Diagnosis", "confidenceScore": 0.98, 
     "isNegated": false}, 
    {"offset": 54, "length": 8, "text": "COVID-19", 
     "category": "Diagnosis", "confidenceScore": 1.0, "isNegated": false, 
     "links": [
       {"dataSource": "UMLS", "id": "C5203670"}, 
       {"dataSource": "ICD10CM", "id": "U07.1"}, ... ]},
 "relations": [
    {"relationType": "Abbreviation", "bidirectional": true, 
     "source": "#/results/documents/2/entities/6", 
     "target": "#/results/documents/2/entities/7"}, ...],
}
&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;In production, you may want to incorporate some code that will retry the operation when an error is returned by the service. For more guidance on proper implementation of cognitive services&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely"&gt;REST&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;clients, you can&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/azure/ai/textanalytics" target="_blank" rel="noopener"&gt;check source code&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of Azure Python SDK, or use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://swagger.io/" target="_blank" rel="noopener"&gt;Swagger&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to generate client code.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="using-cosmosdb-to-store-analysis-result"&gt;Using Cosmos DB to Store Analysis Result&lt;/H2&gt;
&lt;P&gt;Using Python code similar to the one above, we can extract JSON entity/relation metadata for each paper abstract. This process takes quite some time for 400K papers, and to speed it up it can be parallelized using technologies such as &lt;A href="https://docs.microsoft.com/azure/batch/?WT.mc_id=aiml-20447-dmitryso" target="_self"&gt;Azure Batch&lt;/A&gt; or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/machine-learning/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;. However, in my first experiment I just ran the script on one VM in the cloud, and the data was ready in around 11 hours.&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_3-1618309813793.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272402i1F0BECF51E41517F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_3-1618309813793.png" alt="shwars_3-1618309813793.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Having done this, we have now obtained a collection of papers, each having a number of entities and corresponding relations. This structure is inherently hierarchical, and the best way to store and process it would be to use a NoSQL approach for data storage. In Azure,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cosmos-db/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cosmos DB&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is a universal database that can store and query semi-structured data like our JSON collection, so it makes sense to upload all JSON documents to a Cosmos DB collection. This can be done using the following code:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;coscli = azure.cosmos.CosmosClient(cosmos_uri, credential=cosmos_key)
cosdb = coscli.get_database_client("CORD")
cospapers = cosdb.get_container_client("Papers")
for x in all_papers_json:
    cospapers.upsert_item(x)&lt;/LI-CODE&gt;
&lt;P&gt;Here,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;all_papers_json&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;is a variable (or generator function) containing individual JSON documents for each paper. We also assume that you have created a Cosmos DB database called ‘CORD’, and obtained required credentials into&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cosmos_uri&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cosmos_key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;variables.&lt;/P&gt;
&lt;P&gt;After running this code, we will end up with the container&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;Papers&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with all the metadata. We can now work with this container in the Azure Portal by going to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Data Explorer&lt;/STRONG&gt;:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_4-1618309813810.png" style="width: 631px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272405i549D361D9A56057D/image-dimensions/631x284?v=v2" width="631" height="284" role="button" title="shwars_4-1618309813810.png" alt="shwars_4-1618309813810.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now we can use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cosmos-db/sql-query-getting-started/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;Cosmos DB SQL&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in order to query our collection. For example, here is how we can obtain the list of all medications found in the corpus:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- unique medication names
SELECT DISTINCT e.text 
FROM papers p 
JOIN e IN p.entities 
WHERE e.category='MedicationName'&lt;/LI-CODE&gt;
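&lt;P&gt;The same queries can also be issued from Python with the container client created earlier; a minimal sketch (assuming the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;cospapers&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;client from the upload code above) looks like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: run the medication-name query with the azure-cosmos SDK instead of the portal.
query = """
SELECT DISTINCT e.text
FROM papers p
JOIN e IN p.entities
WHERE e.category = 'MedicationName'
"""
medications = list(cospapers.query_items(query=query, enable_cross_partition_query=True))
print(len(medications), 'distinct medication names')&lt;/LI-CODE&gt;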
&lt;P&gt;Using SQL, we can formulate some very specific queries. Suppose a medical specialist wants to find all proposed dosages of a specific medication (say,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;hydroxychloroquine&lt;/STRONG&gt;) and see all papers that mention those dosages. This can be done using the following query:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- dosage of specific drug with paper titles
SELECT p.title, r.source.text
FROM papers p JOIN r IN p.relations 
WHERE r.relationType='DosageOfMedication' 
AND CONTAINS(r.target.text,'hydro')&lt;/LI-CODE&gt;
&lt;P&gt;You can execute this query interactively in Azure Portal, inside Cosmos DB Data Explorer. The result of the query looks like this:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;[
 {
  "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
  "text": "400 mg"
 },{
  "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
   "text": "maintenance dose"
    },...]&lt;/LI-CODE&gt;
&lt;P&gt;A more difficult task would be to select all entities together with their corresponding ontology ID. This would be extremely useful, because eventually we want to be able to refer to a specific entity (&lt;EM&gt;hydroxychloroquine&lt;/EM&gt;) regardless of the way it was mentioned in the paper (for example,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;HCQ&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;also refers to the same medication). We will use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;as our main ontology.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--- get entities with UMLS IDs
SELECT e.category, e.text, 
  ARRAY (SELECT VALUE l.id 
         FROM l IN e.links 
         WHERE l.dataSource='UMLS')[0] AS umls_id 
FROM papers p JOIN e IN p.entities&lt;/LI-CODE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="creating-interactive-dashboards"&gt;Creating Interactive Dashboards&lt;/H2&gt;
&lt;P&gt;While being able to use a SQL query to answer a specific question, such as medication dosages, is a very useful tool, it is not convenient for non-IT professionals who do not have a high level of SQL mastery. To make the collection of metadata accessible to medical professionals, we can use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;tool to create an interactive dashboard for entity/relation exploration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_5-1618309813826.png" style="width: 597px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272404iF8211DD19E119DA0/image-dimensions/597x533?v=v2" width="597" height="533" role="button" title="shwars_5-1618309813826.png" alt="shwars_5-1618309813826.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the example above, you can see a dashboard of different entities. One can select the desired entity type on the left (e.g.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Medication Name&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in our case), and observe all entities of this type on the right, together with their counts. You can also see the associated&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;IDs in the table, and from the example above one can notice that several entities can refer to the same ontology ID (&lt;EM&gt;hydroxychloroquine&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;HCQ&lt;/EM&gt;).&lt;/P&gt;
&lt;P&gt;To make this dashboard, we need to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://powerbi.microsoft.com/desktop/?WT.mc_id=aiml-20447-dmitryso" target="_blank" rel="noopener"&gt;PowerBI Desktop&lt;/A&gt;. First we need to import the Cosmos DB data - the tool supports direct import of data from Azure.&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_6-1618309813830.png" style="width: 551px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272403iCEB41BA5795F57A2/image-dimensions/551x617?v=v2" width="551" height="617" role="button" title="shwars_6-1618309813830.png" alt="shwars_6-1618309813830.png" /&gt;&lt;/span&gt;
&lt;P&gt;Then we provide the SQL query to get all entities with the corresponding&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;IDs - the one we have shown above - and one more query to display all unique categories. Then we drag those two tables onto the PowerBI canvas to get the dashboard shown above. The tool automatically understands that the two tables are linked by the field named&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;category&lt;/STRONG&gt;, and supports filtering the second table based on the selection in the first one.&lt;/P&gt;
&lt;P&gt;Similarly, we can create a tool to view relations:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_7-1618309813835.png" style="width: 567px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272406iB6EE14130450782C/image-dimensions/567x449?v=v2" width="567" height="449" role="button" title="shwars_7-1618309813835.png" alt="shwars_7-1618309813835.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From this tool, we can make queries similar to the one we made above in SQL, to determine dosages of a specific medication. To do it, we need to select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;DosageOfMedication&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;relation type in the left table, and then filter the right table by the medication we want. It is also possible to create further drill-down tables to display specific papers that mention selected dosages of a medication, making this tool a useful research instrument for medical scientists.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="getting-automatic-insights"&gt;Getting Automatic Insights&lt;/H2&gt;
&lt;P&gt;The most interesting part of the story, however, is to draw some automatic insights from the text, such as the change in medical treatment strategy over time. To do this, we need to write some more code in Python to do proper data analysis. The most convenient way to do that is to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Notebooks embedded into Cosmos DB&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_8-1618309813841.png" style="width: 622px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272408i3CBE604E97DB3587/image-dimensions/622x248?v=v2" width="622" height="248" role="button" title="shwars_8-1618309813841.png" alt="shwars_8-1618309813841.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Those notebooks support embedded SQL queries, so we can execute a SQL query and load the results into a Pandas DataFrame, which is the Python-native way to explore data:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;%%sql --database CORD --container Papers --output meds
SELECT e.text, e.isNegated, p.title, p.publish_time,
       ARRAY (SELECT VALUE l.id FROM l 
              IN e.links 
              WHERE l.dataSource='UMLS')[0] AS umls_id 
FROM papers p 
JOIN e IN p.entities
WHERE e.category = 'MedicationName'&lt;/LI-CODE&gt;
&lt;DIV class="language-sql highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;Here we end up with&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;meds&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;DataFrame, containing names of medicines, together with corresponding paper titles and publishing date. We can further group by ontology ID to get frequencies of mentions for different medications:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;unimeds = meds.groupby('umls_id') \
              .agg({'text' : lambda x : ','.join(x), 
                    'title' : 'count', 
                    'isNegated' : 'sum'})
unimeds['negativity'] = unimeds['isNegated'] / unimeds['title']
unimeds['name'] = unimeds['text'] \
                  .apply(lambda x: x if ',' not in x 
                                     else x[:x.find(',')])
unimeds.sort_values('title',ascending=False).drop('text',axis=1)&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;This gives us the following table:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;umls_id&lt;/TH&gt;
&lt;TH&gt;title&lt;/TH&gt;
&lt;TH&gt;isNegated&lt;/TH&gt;
&lt;TH&gt;negativity&lt;/TH&gt;
&lt;TH&gt;name&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;C0020336&lt;/TD&gt;
&lt;TD&gt;4846&lt;/TD&gt;
&lt;TD&gt;191&lt;/TD&gt;
&lt;TD&gt;0.039414&lt;/TD&gt;
&lt;TD&gt;hydroxychloroquine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0008269&lt;/TD&gt;
&lt;TD&gt;1870&lt;/TD&gt;
&lt;TD&gt;38&lt;/TD&gt;
&lt;TD&gt;0.020321&lt;/TD&gt;
&lt;TD&gt;chloroquine&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C1609165&lt;/TD&gt;
&lt;TD&gt;1793&lt;/TD&gt;
&lt;TD&gt;94&lt;/TD&gt;
&lt;TD&gt;0.052426&lt;/TD&gt;
&lt;TD&gt;Tocilizumab&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C4726677&lt;/TD&gt;
&lt;TD&gt;1625&lt;/TD&gt;
&lt;TD&gt;24&lt;/TD&gt;
&lt;TD&gt;0.014769&lt;/TD&gt;
&lt;TD&gt;remdesivir&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0052796&lt;/TD&gt;
&lt;TD&gt;1201&lt;/TD&gt;
&lt;TD&gt;84&lt;/TD&gt;
&lt;TD&gt;0.069942&lt;/TD&gt;
&lt;TD&gt;azithromycin&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;TD&gt;…&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;C0067874&lt;/TD&gt;
&lt;TD&gt;1&lt;/TD&gt;
&lt;TD&gt;0&lt;/TD&gt;
&lt;TD&gt;0.000000&lt;/TD&gt;
&lt;TD&gt;1-butanethiol&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From this table, we can select the top-15 most frequently mentioned medications:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;top = { 
    x[0] : x[1]['name'] for i,x in zip(range(15),
      unimeds.sort_values('title',ascending=False).iterrows())
}&lt;/LI-CODE&gt;
&lt;P&gt;To see how frequency of mentions for medications changed over time, we can average out the number of mentions for each month:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# First, get table with only top medications 
imeds = meds[meds['umls_id'].apply(lambda x: x in top.keys())].copy()
imeds['name'] = imeds['umls_id'].apply(lambda x: top[x])

# Create a computable field with month
imeds['month'] = imeds['publish_time'].astype('datetime64[M]')

# Group by month
medhist = imeds.groupby(['month','name']) \
          .agg({'text' : 'count', 
                'isNegated' : [positive_count,negative_count] })&lt;/LI-CODE&gt;
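&lt;P&gt;The aggregation above references two helper functions, &lt;CODE class="language-plaintext highlighter-rouge"&gt;positive_count&lt;/CODE&gt; and &lt;CODE class="language-plaintext highlighter-rouge"&gt;negative_count&lt;/CODE&gt;, which are not shown in this excerpt. A minimal sketch of how they might be defined, assuming &lt;CODE class="language-plaintext highlighter-rouge"&gt;isNegated&lt;/CODE&gt; is a boolean flag as in the query above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hypothetical helpers for the aggregation above (names taken from the snippet)
def negative_count(x):
    # number of negated mentions in the group
    return x.astype(bool).sum()

def positive_count(x):
    # number of non-negated mentions in the group
    return (~x.astype(bool)).sum()&lt;/LI-CODE&gt;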
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&lt;SPAN style="font-family: inherit;"&gt;This gives us the DataFrame that contains number of positive and negative mentions of medications for each month. From there, we can plot corresponding graphs using&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;matplotlib&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;medh = medhist.reset_index()
fig,ax = plt.subplots(5,3)
for i,n in enumerate(top.keys()):
    medh[medh['name']==top[n]] \
    .set_index('month')['isNegated'] \
    .plot(title=top[n],ax=ax[i//3,i%3])
fig.tight_layout()&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_9-1618309813852.png" style="width: 636px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272407iE9F521F29AE64C09/image-dimensions/636x259?v=v2" width="636" height="259" role="button" title="shwars_9-1618309813852.png" alt="shwars_9-1618309813852.png" /&gt;&lt;/span&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="visualizing-terms-co-occurrence"&gt;Visualizing Terms Co-Occurrence&lt;/H2&gt;
&lt;P&gt;Another interesting insight is to observe which terms occur frequently together. To visualize such dependencies, there are two types of diagrams:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sankey diagram&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;allows us to investigate relations between two types of terms, eg. diagnosis and treatment&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Chord diagram&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;helps to visualize co-occurrence of terms of the same type (eg. which medications are mentioned together)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To plot both diagrams, we need to compute a&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;co-occurrence matrix&lt;/STRONG&gt;, which in row&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;i&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and column&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;j&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;contains the number of co-occurrences of terms&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;i&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;j&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the same abstract (note that this matrix is symmetric). To compute it, we manually select a relatively small number of terms for our ontology, grouping some terms together if needed:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;treatment_ontology = {
 'C0042196': ('vaccination',1),
 'C0199176': ('prevention',2),
 'C0042210': ('vaccines',1), ... }

diagnosis_ontology = {
 'C5203670': ('COVID-19',0),
 'C3714514': ('infection',1),
 'C0011065': ('death',2),
 'C0042769': ('viral infections',1),
 'C1175175': ('SARS',3),
 'C0009450': ('infectious disease',1), ...}&lt;/LI-CODE&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&lt;SPAN style="font-family: inherit;"&gt;Then we define a function to compute co-occurrence matrix for two categories specified by those ontology dictionaries:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&lt;LI-CODE lang="python"&gt;def get_matrix(cat1, cat2):
    d1 = {i:j[1] for i,j in cat1.items()}
    d2 = {i:j[1] for i,j in cat2.items()}
    s1 = set(cat1.keys())
    s2 = set(cat2.keys())
    a = np.zeros((len(cat1),len(cat2)))
    for i in all_papers:
        ent = get_entities(i)
        for j in ent &amp;amp; s1:
            for k in ent &amp;amp; s2 :
                a[d1[j],d2[k]] += 1
    return a&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;Here&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;get_entities&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;function returns the list of&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR style="font-family: inherit;" title="Unified Medical Language System - one of standard ontologies used in medical domain"&gt;UMLS&lt;/ABBR&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;IDs for all entities mentioned in the paper, and&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;all_papers&lt;/CODE&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;is the generator that returns the complete list of paper abstracts metadata.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
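&lt;P&gt;Neither helper is shown in this excerpt; a minimal sketch of &lt;CODE class="language-plaintext highlighter-rouge"&gt;get_entities&lt;/CODE&gt;, assuming each paper document follows the same entities/links structure used in the SQL queries above, could look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hypothetical sketch: collect the set of UMLS IDs linked to entities in one paper document
def get_entities(paper):
    return set(l['id']
               for e in paper.get('entities', [])
               for l in e.get('links', [])
               if l.get('dataSource') == 'UMLS')&lt;/LI-CODE&gt;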
&lt;P&gt;To actually plot the Sankey diagram, we can use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://plotly.com/python/" target="_blank" rel="noopener"&gt;Plotly&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;graphics library. This process is well described&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://plotly.com/python/sankey-diagram/" target="_blank" rel="noopener"&gt;here&lt;/A&gt;, so I will not go into further details. Here are the results:&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_10-1618309813867.png" style="width: 657px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272411i9CA3DE07AC0D98A4/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_10-1618309813867.png" alt="shwars_10-1618309813867.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_11-1618309813875.png" style="width: 657px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272410i9BA843A2C8DAEE3A/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_11-1618309813875.png" alt="shwars_11-1618309813875.png" /&gt;&lt;/span&gt;
&lt;P&gt;Plotting a chord diagram cannot be easily done with Plotly, but it can be done with a different library -&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://pypi.org/project/chord/" target="_blank" rel="noopener"&gt;Chord&lt;/A&gt;. The main idea remains the same - we build the co-occurrence matrix using the same function described above, passing the same ontology twice, and then pass this matrix to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;Chord&lt;/CODE&gt;:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def chord(cat):
    matrix = get_matrix(cat,cat)
    np.fill_diagonal(matrix,0)
    names = cat.keys()
    Chord(matrix.tolist(), names, font_size = "11px").to_html()&lt;/LI-CODE&gt;
&lt;DIV class="language-python highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;The results of chord diagrams for treatment types and medications are below:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="highlight"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_12-1618309813883.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272409iF91465DB62F52534/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_12-1618309813883.png" alt="shwars_12-1618309813883.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_13-1618309813895.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272412iF38991534E116039/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shwars_13-1618309813895.png" alt="shwars_13-1618309813895.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Treatment types&lt;/TD&gt;
&lt;TD&gt;Medications&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The diagram on the right shows which medications are mentioned together (in the same abstract). We can see that well-known combinations, such as&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;hydroxychloroquine + azithromycin&lt;/STRONG&gt;, are clearly visible.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="conclusion"&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;In this post, we have described the architecture of a proof-of-concept system for knowledge extraction from large corpora of medical texts. We use Text Analytics for Health to perform the main task of extracting entities and relations from the text, and then combine a number of Azure services to build a query tool for medical scientists and to extract some visual insights. The system is still quite conceptual, and it could be further improved by providing more detailed drill-down functionality in the PowerBI module, as well as by doing more data exploration on the extracted entity/relation collection. It would also be interesting to process full-text articles, in which case we would need to think about slightly different criteria for co-occurrence of terms (e.g. in the same paragraph vs. the same paper).&lt;/P&gt;
&lt;P&gt;The same approach can be applied in other scientific areas, but we would need to be prepared to train a custom neural network model to perform entity extraction. This task has been briefly outlined above (when we talked about the use of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model"&gt;BERT&lt;/ABBR&gt;), and I will try to focus on it in one of my next posts. Meanwhile, feel free to reach out to me if you are doing similar research, or have any specific questions on the code and/or methodology.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 13 Apr 2021 19:42:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890</guid>
      <dc:creator>shwars</dc:creator>
      <dc:date>2021-04-13T19:42:11Z</dc:date>
    </item>
    <item>
      <title>Learn about Bot Framework Composer’s new authoring experience and deploy your bot to a telephone</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Customer expectations continue to increase,&amp;nbsp;looking for&amp;nbsp;immediate response and rapid issue resolution, across multiple&amp;nbsp;channels&amp;nbsp;24/7.&amp;nbsp;Nowhere is this more apparent than the contact center, with this&amp;nbsp;landscape&amp;nbsp;is&amp;nbsp;driving the need for&amp;nbsp;efficiencies, such as reducing&amp;nbsp;call&amp;nbsp;handling times&amp;nbsp;and increasing call deflection rates&amp;nbsp;– all whilst aiming to deliver a&amp;nbsp;personalized and tailored&amp;nbsp;customer experience.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To help&amp;nbsp;respond to this need,&amp;nbsp;we announced&amp;nbsp;the public preview of the telephony channel for Azure Bot Service&amp;nbsp;in February 2021,&amp;nbsp;expanding&amp;nbsp;the already significant number of touch points&amp;nbsp;offered by the service, to include&amp;nbsp;this&amp;nbsp;increasingly&amp;nbsp;critical method of communication.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Built on&amp;nbsp;state-of-the-art speech&amp;nbsp;services&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The&amp;nbsp;new telephony channel, combined with our&amp;nbsp;Bot Framework&amp;nbsp;developer&amp;nbsp;platform,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;makes it easy to&amp;nbsp;rapidly&amp;nbsp;build &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;always-available &lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;virtual&amp;nbsp;assistants, or IVR assistants,&amp;nbsp;that provide&amp;nbsp;natural language&amp;nbsp;intent-based call handling&amp;nbsp;and the ability to&amp;nbsp;handle advanced conversation&amp;nbsp;flows, such as context switching&amp;nbsp;and&amp;nbsp;responding to&amp;nbsp;follow up questions&amp;nbsp;and still meeting the&amp;nbsp;goal of&amp;nbsp;reducing operational costs for enterprises.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This new capability&amp;nbsp;combines several of our&amp;nbsp;Azure&amp;nbsp;and AI services, including&amp;nbsp;our &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;state-of-the-art &lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;Cognitive Speech Service,&amp;nbsp;enabling fluid, natural-sounding speech that matches the patterns and intonation of human voices&amp;nbsp;through&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Text-to-Speech neural voices&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;with&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/services/communication-services/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Communications Services&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;powering&amp;nbsp;various&amp;nbsp;calling&amp;nbsp;capabilities.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The channel also&amp;nbsp;provides&amp;nbsp;support&amp;nbsp;for&amp;nbsp;full duplex conversations&amp;nbsp;and&amp;nbsp;streaming audio over PSTN, support for DTMF,&amp;nbsp;barge-in&amp;nbsp;(allowing a caller to interrupt the virtual&amp;nbsp;assistant)&amp;nbsp;and more.&amp;nbsp;Follow our roadmap and try out one of our samples on the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/botframework-telephony" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Telephony channel GitHub repository&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Improving our Conversational AI SDK and tools for&amp;nbsp;speech experiences&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To&amp;nbsp;compliment the introduction of the telephony channel and ensure our customers can create industry leading experiences, we have&amp;nbsp;added new features to Bot Framework Composer,&amp;nbsp;an&amp;nbsp;open-source&amp;nbsp;conversational&amp;nbsp;authoring&amp;nbsp;tool, featuring a visual canvas,&amp;nbsp;built on top of the Bot Framework SDK,&amp;nbsp;allowing you&amp;nbsp;to extend and customize the conversation with code and pre-built components.&amp;nbsp; Updates to Composer to support speech experiences include,&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;The ability to add tailored speech responses&amp;nbsp;in seconds, either for a voice only or multi-modal (text and speech)&amp;nbsp;agent.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Addition of global application settings for your bot, allowing you to set a consistent voice font to be used on speech enabled channels, including taking care of setting the required base SSML tags.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Authoring UI&amp;nbsp;helpers that allow you to&amp;nbsp;add additional&amp;nbsp;common SSML (&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Speech&amp;nbsp;Synthesis&amp;nbsp;Markup Language&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;)&amp;nbsp;tags to control the intonation, speed and even the style of the voice used,&amp;nbsp;including new styles available for our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;neural voice fonts&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;, such as&amp;nbsp;a dedicated Customer Service style.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Comprehensive Contact Center solution through Dynamics 365&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft announced&amp;nbsp;the expansion of Microsoft Dynamics 365 Customer Service omnichannel capabilities to include a new voice channel,&amp;nbsp;that is built on this telephony channel&amp;nbsp;infrastructure.&amp;nbsp;With&amp;nbsp;native&amp;nbsp;voice, businesses receive seamless, end-to-end&amp;nbsp;experiences within a single solution, ensuring consistent, personalized, and connected support across all channels of engagement.&amp;nbsp;This&amp;nbsp;new voice channel for Customer Service enables an all-in-one customer service solution without fragmentation or manual data integration&amp;nbsp;required, and&amp;nbsp;enables a faster time to value.&amp;nbsp;Learn&amp;nbsp;more&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://cloudblogs.microsoft.com/dynamics365/bdm/2020/09/23/new-voice-channel-streamlines-omnichannel-customer-experiences/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;here&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Get started building for telephony!&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Sign up for&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/free/cognitive-services/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Azure trial&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Join&amp;nbsp;us on &lt;A href="https://www.youtube.com/watch?v=kdA6zAnCXzM" target="_self"&gt;live stream of AI Show&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;on 4/16 11AM&amp;nbsp;PDT&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Sign up for&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/bd-p/AzureAIAMA" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;conversational AI Ask Microsoft Anything (4/28)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;To&amp;nbsp;get started&amp;nbsp;developing a virtual agent, that you can surface via the new telephony channel today, download&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/trycomposer" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Bot Framework Composer&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;To read more about the telephony channel preview, including documentation and samples, visit the Bot Framework telephony channel&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/botframework-telephony" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;GitHub&amp;nbsp;repository&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 14 Apr 2021 16:41:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739</guid>
      <dc:creator>KelvinChen</dc:creator>
      <dc:date>2021-04-14T16:41:33Z</dc:date>
    </item>
    <item>
      <title>Introducing Multivariate Anomaly Detection</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679</link>
      <description>&lt;P&gt;Microsoft partners and customers have been building metrics monitoring solutions for AIOps and predictive maintenance, by leveraging the easy-to-use time-series anomaly detection Cognitive Service: Anomaly Detector. Because of its ability to analyze time-series individually, Anomaly Detector is benefiting the industry with its simplicity and scalability.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;What's new&lt;/H2&gt;
&lt;P&gt;We are pleased to announce the new multi-variate capability of Anomaly Detector. The new multivariate anomaly detection APIs in Anomaly Detector further enable developers to easily integrate advanced AI of detecting anomalies from groups of metrics into their applications without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals are now counted as key factors. The new feature protects your mission-critical systems and physical assets, such as software applications, servers, factory machines, spacecraft, or even your business, from failures with a holistic view.&lt;/P&gt;
&lt;P&gt;Imagine 20 sensors from an auto engine generating 20 different signals, e.g., vibration, temperature, etc. The readings of those signals individually may not tell you much on system-level issues, but together, could represent the health of the engine. When the synergy of those signals turns odd, the multivariate anomaly detection feature can sense the anomaly like a seasoned floor expert. Moreover, the AI models are trained and customized for your data such that it understands your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time-series anomaly detection capabilities as well as the interpretability of the anomalies into predictive maintenance solutions, or AIOps monitoring solutions for complex enterprise software, or business intelligence tools.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Customer love&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Siemens.png" style="width: 197px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270943iF799F4859624796C/image-size/small?v=v2&amp;amp;px=200" role="button" title="Siemens.png" alt="Siemens.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;“Medical device production demands unprecedented precision. For this reason, the Siemens Healthineers team uses Multivariate Anomaly Detector (MVAD) in medical device stress tests during the final inspection in the production. We found MVAD easy to use and work almost out of the box with promising performance. With the ready-to-use model, we don't need to develop a custom AD model, which ensures a short time to market. We plan to expand this technology also to other use cases. It is made easy due to good integration into our ML platform and processes.” - Dr. Jens Fürst, Head Digitalization and Automation at Siemens Healthineers&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Airbus.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270947iB01CC75155D882ED/image-size/small?v=v2&amp;amp;px=200" role="button" title="Airbus.jpg" alt="Airbus.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;To better understand the health and condition of the aircraft and foresee and fix potential problems before they occur, Airbus deployed Anomaly Detector, part of Cognitive Services, to gather and analyze the telemetry data. It began as a proof of concept of the aircraft-monitoring application by loading telemetry data from multiple flights for analysis and model training. “Early tests have shown that for many cases, the out-of-the-box solution works beautifully, which helps us deploy our solutions faster. I would say that we save up to three months on development for our smaller use cases with Anomaly Detector.” &lt;BR /&gt;Marcel Rummens: Product Owner of Internal AI Platform, Airbus&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;AI horsepower&lt;/H2&gt;
&lt;P&gt;Time-series anomaly detection is an important research topic in data mining and has a wide range of applications in the industry. Efficient and accurate anomaly detection helps companies to monitor their key metrics continuously and alert for potential incidents on time. In many real-world applications like predictive maintenance and SpaceOps, multiple time-series metrics are collected to reflect the health status of a system. Univariate time-series anomaly detection algorithms can find anomalies for a single metric. However, it could be problematic in deciding whether the whole system is running normally. For example, sudden changes of a certain metric do not necessarily mean failures of the system. As shown in Figure 1, there are obvious boosts in the volume of TIMESERIES RECEIVED and DATA RECEIVED ON FLINK in the green segment, but the system is still in a healthy state as these two features share a consistent tendency. However, in the red segment, GC shows an inconsistent pattern with other metrics, indicating a problem in garbage collection. Consequently, it is essential to take the correlations between different time series into consideration in a multivariate time-series anomaly detection system.&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="figure1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270976iC877B155C1E6BCA6/image-size/large?v=v2&amp;amp;px=999" role="button" title="figure1.png" alt="Fig.1" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Fig.1&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this newly introduced feature, we productized a novel framework — MTAD-GAT (Multivariate Time-series Anomaly Detection via Graph Attention Network), to tackle the limitations of previous solutions. Our method considers each univariate time-series as an individual feature and tries to model the correlations between different features explicitly, while the temporal dependencies within each time-series are modeled at the same time. The key ingredients in our model are two graph attention layers, namely the feature-oriented graph attention layer and the time-oriented graph attention layer. The feature-oriented graph attention layer captures the causal relationships between multiple features, and the time-oriented graph attention layer underlines the dependencies along the temporal dimension. In addition, we jointly train a forecasting-based model and a reconstruction-based model for better representations of time-series data. The two models can be optimized simultaneously by a joint objective function.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="maga.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270978iFB7524395292661F/image-size/large?v=v2&amp;amp;px=999" role="button" title="maga.png" alt="maga.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;The magic behind the scenes can be summarized as follows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A novel framework to solve the multivariate time-series anomaly detection problem in a self-supervised manner. Our model shows superior performances on two public datasets and establishes state-of-the-art scores in the literature.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;For the first time, we leverage two parallel graph attention (GAT) layers to learn the relationships between different time-series and timestamps dynamically. Especially, our model captures the correlations between different time-series successfully without any prior knowledge.&lt;/LI&gt;
&lt;LI&gt;We integrate the advantages of both forecasting-based and reconstruction-based models by introducing a joint optimization target. The forecasting-based model focuses on single-timestamp prediction, while the reconstruction-based model learns a latent representation of the entire time-series.&lt;/LI&gt;
&lt;LI&gt;Our network has good interpretability. We analyze the attention scores of multiple time-series learned by the graph attention layers, and the results correspond reasonably well to human intuition. We also show its capability of anomaly diagnosis.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Multivariate anomaly detection API overview&lt;/H2&gt;
&lt;P&gt;This new feature has a different workflow compared with the existing univariate feature. There are two phases to obtain detection results: the training phase and the inference phase. In the training phase, you need to provide some historical data to let the model learn past patterns. Then, in the inference phase, you can call the inference API to acquire detection results for multivariate time-series in a given range.&lt;/P&gt;
&lt;TABLE width="691"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;&lt;STRONG&gt;APIs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;&lt;STRONG&gt;Functionality&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Create and train model using training data&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelid}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Get model info including training status and parameters used in the model&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models[?$skip][&amp;amp;$top]&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;List models of a subscription&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelid}/detect&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Submit an asynchronous inference task with the user's data&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/results/{resultid}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Get anomalies + root causes (the contribution scores of each variate for each incident)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Delete an existing multivariate model according to the modelId&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="299"&gt;
&lt;P&gt;/multivariate/models/{modelId}/export&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="392"&gt;
&lt;P&gt;Export Multivariate Anomaly Detection Model as Zip file&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
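&lt;P&gt;To make the two-phase workflow concrete, here is a rough sketch of how the training and inference calls from the table above could be issued from Python. The endpoint version, request-body fields, response field names, and polling details are assumptions for illustration only; see the QuickStarts below for the exact contract.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import time
import requests

# Placeholder resource values - replace with your Anomaly Detector endpoint and key
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com/anomalydetector/v1.1-preview"
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR-KEY", "Content-Type": "application/json"}

# 1. Training phase: create and train a model from historical data (body fields are illustrative)
train_body = {"source": "https://YOUR-STORAGE/training-data.zip",
              "startTime": "2021-01-01T00:00:00Z",
              "endTime": "2021-03-01T00:00:00Z"}
r = requests.post(ENDPOINT + "/multivariate/models", json=train_body, headers=HEADERS)
# assumes the service returns the new model's URL in the Location header
model_id = r.headers["Location"].split("/")[-1]

# 2. Poll the model info endpoint until training completes (status field name assumed)
while requests.get(ENDPOINT + "/multivariate/models/" + model_id,
                   headers=HEADERS).json()["modelInfo"]["status"] != "READY":
    time.sleep(10)

# 3. Inference phase: submit an asynchronous detection task, then fetch anomalies and root causes
detect_body = {"source": "https://YOUR-STORAGE/inference-data.zip",
               "startTime": "2021-03-01T00:00:00Z",
               "endTime": "2021-03-02T00:00:00Z"}
r = requests.post(ENDPOINT + "/multivariate/models/" + model_id + "/detect",
                  json=detect_body, headers=HEADERS)
result_id = r.headers["Location"].split("/")[-1]
results = requests.get(ENDPOINT + "/multivariate/results/" + result_id, headers=HEADERS).json()&lt;/LI-CODE&gt;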
&lt;H2&gt;Get started!&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/" target="_blank" rel="noopener"&gt;Learning more from our documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;QuickStarts: &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158805" target="_blank" rel="noopener"&gt;C#,&lt;/A&gt; &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158900" target="_blank" rel="noopener"&gt;Python&lt;/A&gt;, &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158901" target="_blank" rel="noopener"&gt;JavaScript&lt;/A&gt;, &lt;A href="https://go.microsoft.com/fwlink/?linkid=2158901" target="_blank" rel="noopener"&gt;Java&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;Artificial Intelligence for developers&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Apr 2021 15:11:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679</guid>
      <dc:creator>Tony_Xing</dc:creator>
      <dc:date>2021-04-12T15:11:56Z</dc:date>
    </item>
    <item>
      <title>Supercharge Azure ML code development with new VS Code integration</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Machine Learning (Azure ML) team is excited to announce the release of an enhanced developer experience for ‘compute instance’ and ‘notebooks’ users, through a VS Code integration in the Azure ML Studio! It is now easier than ever to work directly on your Azure ML compute instances from within Visual Studio Code, with full access to a remote terminal, your favorite VS Code extensions, the Git source control UI, and a debugger.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="vscode-small.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270923i4A4EF83FBEBCE7EB/image-size/large?v=v2&amp;amp;px=999" role="button" title="vscode-small.gif" alt="vscode-small.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bringing VS Code to Azure Machine Learning&lt;/H2&gt;
&lt;P&gt;The Azure Machine Learning and VS Code teams have been working in collaboration over the past couple of months to better understand user workflows for authoring, editing, and managing code files. The demand for VS Code became clear after speaking to a wide variety of users tasked with managing larger projects and operationalizing their models. Users were eager to continue working on their Azure ML compute resources and retain the development context initially defined through the Studio UI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first step to enabling a better editing experience for users was to evaluate what was currently used in VS Code. Users were familiar with extensions such as &lt;A href="https://code.visualstudio.com/docs/remote/ssh" target="_blank" rel="noopener"&gt;Remote-SSH&lt;/A&gt; and Jupyter, the former used to connect to their remote compute and the latter to author notebook files. The advantage of using Jupyter, JupyterLab, or &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906" target="_blank" rel="noopener"&gt;Azure ML notebooks&lt;/A&gt; was that they could be used for all compute instance types without requiring any additional configuration or networking changes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To enable users to work against their compute instances without requiring SSH or additional networking changes, the Azure ML and VS Code teams built a &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630" target="_blank" rel="noopener"&gt;Notebook-specific compute instance connect experience&lt;/A&gt;. The Azure ML extension was responsible for facilitating the connection between VS Code – Jupyter and the compute instance, taking care of authenticating on the user’s behalf. A month or so after releasing this capability, it was clear that users were excited about connectivity without SSH and being able to work directly from within VS Code. However, working in the editor implied expectations around being able to use other VS Code features such as the remote terminal, debugger, and language server. Users expressed their frustration with being limited to working in a single Notebook file, being unable to view files on the remote server, and not being able to use their preferred extensions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;VS Code Integration: Features&lt;/H2&gt;
&lt;P&gt;Learning from prior releases and talking to users led the Azure ML and VS Code teams to build a &lt;STRONG&gt;complete VS Code experience&lt;/STRONG&gt; for compute instances&amp;nbsp;&lt;STRONG&gt;without using SSH&lt;/STRONG&gt;. Getting started with this experience is trivial – entry points have been integrated within the &lt;A href="http://ml.azure.com" target="_blank" rel="noopener"&gt;Azure ML Studio&lt;/A&gt; in both the Compute Instance and Notebooks tabs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="compute-entry-point.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270905i713BCB471336A361/image-size/large?v=v2&amp;amp;px=999" role="button" title="compute-entry-point.png" alt="Studio UI Compute Entry Point" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Studio UI Compute Entry Point&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="notebooks-entry-point.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270907i6BE6C0DEA9E84831/image-size/large?v=v2&amp;amp;px=999" role="button" title="notebooks-entry-point.png" alt="Studio UI Notebooks Entry Point" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Studio UI Notebooks Entry Point&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Through this VS Code integration customers will now have access to the following features and benefits:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Full integration with &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-files" target="_self"&gt;Azure ML file share and notebooks&lt;/A&gt;:&lt;/STRONG&gt; All file operations in VS Code are fully synced with the Azure ML Studio. For example, if a user drags and drops files from their local machine into VS Code connected to Azure ML, all files will be synced and appear in the Azure ML Studio.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/Docs/editor/versioncontrol#_git-support" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Git UI Experiences&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Fully manage Git repos in Azure ML with the rich VS Code source control UI.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/python/jupyter-support" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Notebook Editor&lt;/STRONG&gt;&lt;/A&gt;: Seamlessly click out from the Azure ML notebooks and continue to work on notebooks in the new native VS code editor.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/python/debugging" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Debugging&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Use the native debugging in VS Code to debug any training script before submitting it to an Azure ML cluster for batch training.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/editor/integrated-terminal" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;VS Code Terminal&lt;/STRONG&gt;&lt;/A&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Work in the VS Code terminal that is fully connected to the compute instance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://code.visualstudio.com/docs/editor/extension-gallery" target="_self"&gt;VS Code Extension Support&lt;/A&gt;:&lt;/STRONG&gt; All VS Code extensions are fully supported in VS Code connected to the compute instance.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="font-family: inherit;"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security" target="_self"&gt;Enterprise Support&lt;/A&gt;:&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; Work with VS Code securely in private endpoints without additional, complicated SSH and networking configuration. AAD credentials and RBAC are used to establish a secure connection to VNET/private link enabled Azure ML workspaces.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;VS Code Integration: How it Works&lt;/H2&gt;
&lt;P&gt;Clicking out to VS Code will launch a desktop VS Code session which initiates a secondary remote connection to the target compute. Within the remote connection window, the Azure ML extension creates a WebSocket connection between your local VS Code client and the remote compute instance.&lt;/P&gt;
&lt;P&gt;The connected window now provides you with:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Access to the mounted file share, with consistent syncing between what is seen in Jupyter and the Azure ML Notebooks experience.&lt;/LI&gt;
&lt;LI&gt;Access to the machine’s local SSD in case you would like to clone and manage repos outside of the shared file share.&lt;/LI&gt;
&lt;LI&gt;The ability to manage repositories through the source control UI.&lt;/LI&gt;
&lt;LI&gt;The ability to create, interact with, and debug running applications.&lt;/LI&gt;
&lt;LI&gt;A remote terminal for executing commands directly against the remote compute.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Below is a high-level overview of the remote connection:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="remote-connect-hl-arch.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270909iF0AB1D7C0143EE92/image-size/large?v=v2&amp;amp;px=999" role="button" title="remote-connect-hl-arch.png" alt="Remote Connection Architecture Diagram (High-Level)" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Remote Connection Architecture Diagram (High-Level)&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This new connect capability and direct integration in the Azure ML Studio creates a better-together experience between Azure ML and VS Code! When working on your machine learning projects, you can get started with a notebook in the Azure ML Studio for early data prep and exploratory work. When you’re ready to flesh out the rest of your project, work on multiple file types, and use more advanced editing capabilities and VS Code extensions, you can seamlessly transition to working in VS Code. The retained context and file share usage enable you to move bi-directionally (from notebooks to VS Code and vice-versa) without requiring additional work.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;You can initiate the connection to VS Code directly from the Studio UI through either the Compute Instance or Notebook pages. Alternatively, there are routes starting directly within VS Code if you would prefer. Given you have the &lt;A href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;Azure Machine Learning extension&lt;/A&gt; installed, you can find the compute instance in the tree view and right-click on it to connect. You can also invoke the command “Azure ML: Connect to compute instance” and follow the prompts to initiate the connection.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-command.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270910iB2F852D3AA9A8056/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci-command.png" alt="Azure ML extension command" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure ML extension command&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-context-menu.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270911i0915A8CE80FADF31/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci-context-menu.png" alt="Azure ML extension tree view context menu option" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure ML extension tree view context menu option&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more details on how you can get started with this experience, please take a look at our &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-vs-code-remote?tabs=extension" target="_blank" rel="noopener"&gt;public documentation&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Both the Azure ML and VS Code extension teams are always looking for feedback on our current experiences and what we should work on next. If there is anything you would like us to prioritize, please feel free to suggest so via our &lt;A href="https://github.com/microsoft/vscode-tools-for-ai/issues" target="_blank" rel="noopener"&gt;GitHub repo&lt;/A&gt;; if you would like to provide more general feedback, please &lt;A href="https://aka.ms/aml-ext-survey" target="_blank" rel="noopener"&gt;fill out our survey&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Apr 2021 15:25:26 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129</guid>
      <dc:creator>Sid_Unnithan</dc:creator>
      <dc:date>2021-04-08T15:25:26Z</dc:date>
    </item>
    <item>
      <title>Eleven more languages are generally available for Azure Neural Text-to-Speech</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored with Lihui Wang, Gang Wang, Xinfeng Chen, Qinying Liao, Garfield He and Sheng Zhao&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt; (Neural TTS), part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. Today we are happy to announce that 6 new languages were added to the Neural TTS portfolio with 12 voices available, and the 10 voices in preview with 5 languages are now generally available.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Six new languages&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;12 voices from 6 brand-new languages, with one male and one female voice in each language, are available now: Nia in ‘cy-GB’ Welsh (United Kingdom), Aled in ‘cy-GB’ Welsh (United Kingdom), Rosa in ‘en-PH’ English (Philippines), James in ‘en-PH’ English (Philippines), Charline in ‘fr-BE’ French (Belgium), Gerard in ‘fr-BE’ French (Belgium), Dena in ‘nl-BE’ Dutch (Belgium), Arnaud in ‘nl-BE’ Dutch (Belgium), Polina in ‘uk-UA’ Ukrainian (Ukraine), Ostap in ‘uk-UA’ Ukrainian (Ukraine), Uzma in ‘ur-PK’ Urdu (Pakistan), and Asad in ‘ur-PK’ Urdu (Pakistan).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the samples below or try them with your own text in our&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;product demo on Azure&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;&lt;STRONG&gt;Audio sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;cy-GB&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Welsh (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;cy-GB-NiaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Mae'r ysgol ar agor drwy'r wythnos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210223-0759551088/TTS-NiaNeural-Waves-Shortsentence-00002.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=b3aatrBz8UIddVDkuFSOc9N2KlGs2dtcIVHxd5HwShU%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;cy-GB&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Welsh (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;cy-GB-AledNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Mae Bangor 8 milltir o Gaernarfon.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210222-0949572958/TTS-AledNeural-Waves-GeneralSentence-00009.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=REoamfTScigj6NINsMxw6XxclSTCD5CyTNJ14CUVvrA%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;en-PH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;English (Philippines)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;en-PH-RosaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;I need to buy a mineral water.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1015010108/TTS-RosaNeural-Waves-GeneralSentence-00058.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=nalnHnLzKCpXrVqEcGz6RBuG1BTwEbyfhk0iRjXEUz4%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;en-PH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;English (Philippines)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;en-PH-JamesNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Let's meet tomorrow at 6 pm.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1019419930/TTS-JamesNeural-Waves-GeneralSentence-00031.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=yrVpXhdhhk25%2FjYhZCJc45aKfrwp1C%2FY8QdHUyhILWU%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;fr-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;French (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;fr-BE-CharlineNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;On se voit pour dîner demain ?&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1008227048/TTS-CharlineNeural-Waves-GeneralSentence-00016.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=nmDuOtQXSZQtgOuPxxuaVDRT4Ljct9CEg7Ee54OA8qE%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;fr-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;French (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;fr-BE-GerardNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Il existe 2 manières de participer.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1018241597/TTS-GerardNeural-Waves-GeneralSentence-00036.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=T698dE7j4VlnIzFh%2Fxu%2BMMPjkOjAG6a5yCuSrT4Mtcs%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;nl-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Dutch (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;nl-BE-DenaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Hij is al urenlang online.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1041306573/TTS-DenaNeural-Waves-GeneralSentence-00008.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=43Wt1OVaATmHPCAhdBOsuJebK01KUV959Bfg%2Ft0giL8%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;nl-BE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Dutch (Belgium)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;nl-BE-ArnaudNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Ik vond vele kabouters in hun tuin.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1048103107/TTS-ArnaudNeural-Waves-GeneralSentence-00038.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=HmbZ58lyEUc57Tq6vwNOptr4avEoTc5d3HdLxt20ZuE%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;uk-UA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Ukrainian (Ukraine)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;uk-UA-PolinaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;У Києві завершили реставрацію Андріївської церкви.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0931272540/TTS-PolinaNeural-Waves-GeneralSentence-00042.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=cqZZm%2BwrPWhCXjrDS5UJQFP%2FTHGfDoFesOHVEhxdXhQ%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;uk-UA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Ukrainian (Ukraine)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;uk-UA-OstapNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P&gt;Загалом було оновлено 4 395 км доріг.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0936496995/TTS-OstapNeural-Waves-GeneralSentence-00012.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=Kc4hCGYi9j9fX4rbq%2FLi9Q%2F0DOu637zzYBbreRXAdaI%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;ur-PK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Urdu (Pakistan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;ur-PK-UzmaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P class="lia-align-right"&gt;واہ! کیا ہی خوبصورت نظارہ ہے۔&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0948509228/TTS-UzmaNeural-Waves-GeneralSentence-00017.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=FeW4%2FPk%2FUWHVPPV6dh6nTIze41cxNoUg3%2B7FgFmeE70%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="59px"&gt;
&lt;P&gt;ur-PK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98px"&gt;
&lt;P&gt;Urdu (Pakistan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="66px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="116px"&gt;
&lt;P&gt;ur-PK-AsadNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="283px"&gt;
&lt;P class="lia-align-right"&gt;سورج گرہن پاکستانی وقت کے مطابق شام 6 بج کر 34 منٹ پر شروع ہو گا۔&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0954494762/TTS-AsadNeural-Waves-GeneralSentence-00043.wav?sr=c&amp;amp;si=ReadPolicy&amp;amp;sig=0op69NuG02bH%2BgMOk7dCzCwW%2Fvl%2FJqyy4E29Aj73DoI%3D"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this update, Azure TTS now supports 60 languages in total. Check out the figure below for more details or see the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener noreferrer"&gt;full language list.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="GarfieldHe_0-1616656804430.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/266926iBAE9E05A59FF3DB9/image-size/large?v=v2&amp;amp;px=999" role="button" title="GarfieldHe_0-1616656804430.png" alt="GarfieldHe_0-1616656804430.png" /&gt;&lt;/span&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
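&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you would like to check programmatically which voices are available to your subscription, one option is the voice list endpoint of the Text-to-Speech REST API. The sketch below (Python with the requests package) is for illustration only and assumes SPEECH_KEY and SPEECH_REGION environment variables for your Speech resource.&lt;/P&gt;
&lt;PRE&gt;# Sketch: list the neural voices available in your region and filter
# for the newly added locales (assumes SPEECH_KEY / SPEECH_REGION env vars).
import os
import requests

region = os.environ["SPEECH_REGION"]
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
headers = {"Ocp-Apim-Subscription-Key": os.environ["SPEECH_KEY"]}

voices = requests.get(url, headers=headers, timeout=10).json()
new_locales = {"cy-GB", "en-PH", "fr-BE", "nl-BE", "uk-UA", "ur-PK"}
for voice in voices:
    if voice["Locale"] in new_locales:
        print(voice["Locale"], voice["ShortName"], voice["Gender"])&lt;/PRE&gt;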
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Five preview languages now GA&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Last November, we released 5 languages in preview with 10 voices for &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;European locales&lt;/A&gt;. These languages are now generally available in all&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;Neural TTS regions/datacenters&lt;/A&gt;. Azure TTS now has full support for all 24 European languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;&lt;STRONG&gt;Audio samples&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;et-EE-AnuNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/et-EE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;et-EE-KertNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Ametlikku meetodit sellise pettuse avastamiseks ei olegi olemas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/et-EE%20Kert.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;ga-IE-OrlaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Tá an scoil sa mbaile ar oscailt arís inniu.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ga-IE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;ga-IE-ColmNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Ritheadh próiseas comhairliúcháin faoin scéal sa bhfómhar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/ga-IE%20Colm.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lt-LT-OnaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lt-LT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lt-LT-LeonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Aišku, anksčiau ar vėliau paaiškės tos priežastys.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lt-LT%20Leonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lv-LV-EveritaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lv-LV.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;lv-LV-NilsNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Aizvadīto gadu uzņēmums noslēdzis ar 6,3 miljonu eiro zaudējumiem.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lv-LV%20Nils.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;mt-MT-GraceNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/mt-MT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="39"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="55"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="60"&gt;
&lt;P&gt;mt-MT-JosephNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="286"&gt;
&lt;P&gt;Anki tfajjel tal-primarja jaf li l-popolazzjoni tikber fejn hemm il-prosperità.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/mt-MT%20Joseph.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How to integrate with the new voices/languages&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure TTS now covers more of the world’s languages. Applications using Azure TTS can easily be updated to cover additional countries and regions. All the voices are available through the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech" target="_blank" rel="noopener"&gt;same API&lt;/A&gt;&amp;nbsp;and &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-cpp" target="_blank" rel="noopener"&gt;SDK&lt;/A&gt;. Developers can simply update the voice and locale list in their applications to use these new voices, without modifying their code logic.&lt;/P&gt;
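&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a minimal sketch of how small that change is, the snippet below uses the Speech SDK for Python to synthesize a short sentence with one of the newly released voices, setting only the voice name. The SPEECH_KEY and SPEECH_REGION environment variables and the sample sentence are illustrative placeholders.&lt;/P&gt;
&lt;PRE&gt;# Minimal sketch: point an existing app at a newly released voice by
# changing only the voice name (assumes the azure-cognitiveservices-speech
# package and SPEECH_KEY / SPEECH_REGION environment variables).
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
# Pick any voice from the tables above, e.g. the Welsh (UK) female voice.
speech_config.speech_synthesis_voice_name = "cy-GB-NiaNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Mae'r ysgol ar agor drwy'r wythnos.").get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success&lt;/PRE&gt;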
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For instance, &lt;A href="https://docs.microsoft.com/en-us/microsoftteams/create-a-phone-system-auto-attendant" target="_blank" rel="noopener"&gt;Microsoft Teams auto attendants&lt;/A&gt;&amp;nbsp;let people call your organization and navigate a menu system to reach the right department, call queue, person, or operator. They use Azure TTS to render customized prompts as call responses. To better localize audio prompts for different countries, Teams has integrated the new TTS languages to serve more customers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Want more languages or voices?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you find that the language which you are looking for is not supported by Azure TTS, reach out to your sales representative, or file a support ticket on Azure. We'd be happy to&amp;nbsp;engage and discuss how to support the languages you need. You can also customize and create a brand voice with your speech data for your apps using the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/custom-neural-voice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt; feature.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Tell us your experiences!&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know how you are using or plan to use Neural TTS voices in this &lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener"&gt;form&lt;/A&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing about your experience and to developing more compelling services together with you for developers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Add voice to your app in 15 minutes&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Explore the available voices in this demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk#optional-change-the-language-and-bot-voice" target="_blank" rel="noopener"&gt;Build a voice-enabled bot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;Deploy Azure TTS voices on prem with Speech Containers&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Build your custom voice&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Mar 2021 15:21:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2021-03-31T15:21:04Z</dc:date>
    </item>
    <item>
      <title>Azure Speech and Batch Ingestion</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539</link>
      <description>&lt;H1&gt;Getting started with Azure Speech and Batch Ingestion Client&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Batch Ingestion Client is a zero-touch transcription solution for all the audio files in your Azure Storage. If you are looking for a quick and effortless way to transcribe your audio files, or even just to explore transcription, without writing any code, then this solution is for you. Through an ARM template deployment, all the resources necessary to seamlessly process your audio files are set up and set in motion.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Why do I need this?&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Getting started with any API requires some amount of time investment in learning the API, understanding its scope, and getting value through trial and error. To speed up your transcription solution, for those of you who do not have the time to invest in getting to know our API or the related best practices, we created an ingestion layer (a client for batch transcription) that helps you set up a full-blown, scalable and secure transcription pipeline without writing any code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is a smart client in the sense that it implements best practices and is optimized for the capabilities of the Azure Speech infrastructure. It utilizes Azure resources such as Service Bus and Azure Functions to orchestrate transcription requests to Azure Speech Services for audio files landing in your dedicated storage containers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Before we delve deeper into the set-up instructions, let us have a look at the architecture of the solution this ARM template builds.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="architecture.png" style="width: 741px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265483iFB98720C64CE6685/image-size/large?v=v2&amp;amp;px=999" role="button" title="architecture.png" alt="architecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The diagram is simple and hopefully self-explanatory. As soon as files land in a storage container, the Event Grid event that indicates the completed upload of a file is filtered and pushed to a Service Bus topic. Azure Functions (time-triggered by default) pick up those events and act, namely creating transcription (Tx) requests using the Azure Speech Services batch pipeline. When the Tx request is successfully carried out, an event is placed in another queue in the same Service Bus resource. A different Azure Function, triggered by the completion event, starts monitoring transcription completion status and copies the actual transcripts into the containers from which the audio files were obtained. That is it. The rest of the features are applied on demand: users can choose to apply analytics on the transcript, produce reports, or redact, all of which are the result of additional resources being deployed through the ARM template. The solution will start transcribing audio files without the need to write any code. If, however, you want to customize further, this is possible too. The code is available in this &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch" target="_self"&gt;repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The list of best practices we implemented as part of the solution are:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Optimized the number of audio files included in each transcription with a view to achieving the shortest possible SAS TTL.&lt;/LI&gt;
&lt;LI&gt;Round-robin across selected regions in order to distribute load across available regions (per customer request)&lt;/LI&gt;
&lt;LI&gt;Retry logic optimization to handle smooth scaling up and transient HTTP 429 errors&lt;/LI&gt;
&lt;LI&gt;Running Azure Functions economically, ensuring minimal execution cost&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Setup Guide&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following guide will help you create a set of resources on Azure that will manage the transcription of audio files.&lt;/P&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/" target="_blank" rel="noopener"&gt;Azure Account&lt;/A&gt;&amp;nbsp;as well as an&amp;nbsp;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank" rel="noopener"&gt;Azure Speech key&lt;/A&gt;&amp;nbsp;is needed to run the Batch Ingestion Client.&lt;/P&gt;
&lt;P&gt;Here are the detailed steps to create a speech resource:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;You need to create a Speech Resource with a paid (S0) key. The free key account will not work. Optionally for analytics you can create a Text Analytics resource too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Click on +Create Resource&lt;/LI&gt;
&lt;LI&gt;Type ‘Speech’ in the search box&lt;/LI&gt;
&lt;LI&gt;Click Create on the Speech resource.&lt;/LI&gt;
&lt;LI&gt;You will find the subscription key under&amp;nbsp;&lt;STRONG&gt;Keys&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;You will also need the region, so make a note of that too.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;To test your account, we suggest you use&amp;nbsp;&lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_blank" rel="noopener"&gt;Microsoft Azure Storage Explorer&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;The Project&lt;/H3&gt;
&lt;P&gt;Although you do not need to download or make any changes to the code, you can still download it from GitHub:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;git clone https://github.com/Azure-Samples/cognitive-services-speech-sdk
cd cognitive-services-speech-sdk/samples/batch/transcription-enabled-storage&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make sure that you have downloaded the&amp;nbsp;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/transcription-enabled-storage/Setup/ArmTemplate.json" target="_blank" rel="noopener"&gt;ARM Template&lt;/A&gt;&amp;nbsp;from the repository.&lt;/P&gt;
&lt;H2&gt;Batch Ingestion Client Setup Instructions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Click on&amp;nbsp;&lt;STRONG&gt;+Create Resource&lt;/STRONG&gt;&amp;nbsp;on the&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;&amp;nbsp;as shown in the following picture and type ‘&lt;EM&gt;template deployment&lt;/EM&gt;’ in the search box.&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image001.png" style="width: 986px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265484i95D5BC1C83CDF228/image-size/large?v=v2&amp;amp;px=999" role="button" title="image001.png" alt="image001.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 2. Click on the&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;button on the screen that appears, as shown below.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;3. You will be creating the relevant Azure resources from the ARM template provided. Click on the ‘Build your own template in the editor’ link and wait for the new screen to load.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will be loading the template via the&amp;nbsp;&lt;STRONG&gt;Load file&lt;/STRONG&gt;&amp;nbsp;option. Alternatively, you could simply copy/paste the template in the editor.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;Saving the template will result in the screen below. You will need to fill in the form provided. It is important that all the information is correct. Let us look at the form and go through each field.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image011.png" style="width: 640px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265489iC005F591513C0D18/image-size/large?v=v2&amp;amp;px=999" role="button" title="image011.png" alt="image011.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;Please use short, descriptive names in the form for your resource group. Long resource group names may result in deployment errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;First pick the Azure Subscription Id within which you will create the resources.&lt;/LI&gt;
&lt;LI&gt;Either pick or create a resource group. [It is better to have all the resources within the same resource group, so we suggest you create a new resource group.]&lt;/LI&gt;
&lt;LI&gt;Pick a region [this may be the same region as your Azure Speech key].&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The following settings all relate to the resources and their attributes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Give your transcription-enabled storage account a name [you will be using a new storage account rather than an existing one]. If you opt to use an existing one, then all existing audio files in that account will be transcribed too.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The following 2 steps are optional. Omitting them will result in using the base model to obtain transcripts. If you have created a Custom Speech model using &lt;A href="https://speech.microsoft.com/" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;, then:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Optionally, enter your primary Acoustic model ID&lt;/LI&gt;
&lt;LI&gt;Optionally, enter your primary Language model ID&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you want us to perform language identification on the audio prior to transcription, you can also specify a secondary locale. Our service will check whether the language of the audio content is the primary or secondary locale and select the right model for transcription.&lt;/P&gt;
&lt;P&gt;Transcripts are obtained by polling the service. We acknowledge that there is a cost related to that. So, the following setting gives you the option to limit that cost by telling your Azure Function how often you want it to fire.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enter the polling frequency [in many scenarios polling only needs to happen a couple of times a day]&lt;/LI&gt;
&lt;LI&gt;Enter the locale of the audio [you need to tell us which language model to use to transcribe your audio]&lt;/LI&gt;
&lt;LI&gt;Enter your Azure Speech subscription key and Locale information&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-SPOILER&gt;&lt;EM&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;If you plan to transcribe a large volume of audio (say millions of files) we propose that you rotate the traffic between regions. In the Azure Speech Subscription Key text box you can put as many keys as you like, separated by a semicolon ';'. It is important that the corresponding regions (again separated by a semicolon ';') appear in the Locale information text box in the same order. For example, if you have 3 keys (abc, xyz, 123) for East US, West US and Central US respectively, then lay them out as 'abc;xyz;123' followed by 'east us;west us;central us'.&lt;/LI-SPOILER&gt;
&lt;P&gt;The rest of the settings relate to the transcription request. You can read more about those in our&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/batch-transcription" target="_blank" rel="noopener"&gt;docs&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select a profanity option&lt;/LI&gt;
&lt;LI&gt;Select a punctuation option&lt;/LI&gt;
&lt;LI&gt;Select to Add Diarization [all locales]&lt;/LI&gt;
&lt;LI&gt;Select to Add Word level Timestamps [all locales]&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Do you need more than transcription? Do you need to apply sentiment analysis to your transcript? Downstream analytics are possible as well, with Text Analytics Sentiment and Redaction offered as part of this solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to perform Text Analytics, please add those credentials:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add Text analytics key&lt;/LI&gt;
&lt;LI&gt;Add Text analytics region&lt;/LI&gt;
&lt;LI&gt;Add Sentiment&lt;/LI&gt;
&lt;LI&gt;Add data redaction&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you want further analytics, we can also map the transcript JSON we produce to a SQL DB schema.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enter SQL DB credential login&lt;/LI&gt;
&lt;LI&gt;Enter SQL DB credential password&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can feed that data to your custom PowerBI script or take the scripts included in this repository. Follow this &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/batch-ingestion-client/PowerBI/guide.md" target="_self"&gt;guide&lt;/A&gt; for setting it up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Press&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;to trigger the resource creation process. It typically takes 1-2 minutes. The set of resources created is listed below.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;If a Consumption Plan (Y1) was selected for the Azure Functions, make sure that the functions are synced with the other resources (see&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/azure-functions/functions-deployment-technologies#trigger-syncing" target="_blank" rel="noopener"&gt;this&lt;/A&gt;&amp;nbsp;for further details).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To do so, click on your StartTranscription function in the portal and wait until your function shows up:&lt;/P&gt;
&lt;DIV id="tinyMceEditorPanos Periorellis_8" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;Do the same for the FetchTranscription function.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-SPOILER&gt;&lt;EM&gt;&lt;STRONG&gt;Important:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;Until you restart both Azure functions you may see errors.&lt;/LI-SPOILER&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Running the Batch Ingestion Client&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Upload audio files to the newly created audio-input container (results are added to the json-result-output and test-results-output containers). Once the files have been processed, you can review the results there.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Use&amp;nbsp;&lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_blank" rel="noopener"&gt;Microsoft Azure Storage Explorer&lt;/A&gt;&amp;nbsp;to test uploading files to your new account. The transcription process is asynchronous and usually takes about half the duration of the audio track. The structure of your newly created storage account will look like the picture below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image015.png" style="width: 297px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265492i29661E0FD6C8203E/image-size/large?v=v2&amp;amp;px=999" role="button" title="image015.png" alt="image015.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are several containers to distinguish between the various outputs. We suggest (for the sake of keeping things tidy) following this pattern and using the audio-input container as the only container for uploading your audio.&lt;/P&gt;
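&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For illustration, the snippet below shows one way to drop a file into the audio-input container from code instead of Storage Explorer, using the azure-storage-blob package for Python. The connection string variable and file name are placeholders for this sketch, not part of the deployed solution.&lt;/P&gt;
&lt;PRE&gt;# Sketch: upload an audio file to the audio-input container so the
# Batch Ingestion Client picks it up (assumes the azure-storage-blob package
# and a STORAGE_CONNECTION_STRING environment variable for the new account).
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("audio-input")

with open("sample-call.wav", "rb") as audio:
    container.upload_blob(name="sample-call.wav", data=audio, overwrite=True)
# Transcripts later appear in the json-result-output container.&lt;/PRE&gt;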
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Customizing the Batch Ingestion Client&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By default, the ARM template uses the newest version of the Batch Ingestion Client, which can be found in this repository. If you want to customize it further, clone the &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch" target="_self"&gt;repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To publish a new version, you can use Visual Studio: right-click on the respective project, click Publish, and follow the instructions.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;SPAN&gt;What to build next&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Now that you’ve successfully implemented a speech-to-text scenario, you can build on it. Take a look at the insights&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/blog/using-text-analytics-in-call-centers/" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt; provides from the transcript, such as caller and agent sentiment, key phrase extraction and entity recognition. If you’re looking specifically to solve for call center transcription, review &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/call-center-transcription" target="_blank" rel="noopener"&gt;this docs page&lt;/A&gt; for further guidance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 20:21:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539</guid>
      <dc:creator>Panos Periorellis</dc:creator>
      <dc:date>2021-03-30T20:21:49Z</dc:date>
    </item>
    <item>
      <title>Microsoft named a Leader in 2021 Gartner Magic Quadrant for Cloud AI Developer Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Gartner CAIDS MQ graphic 2021.png" style="width: 957px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265536i164046209606D6C1/image-size/large?v=v2&amp;amp;px=999" role="button" title="Gartner CAIDS MQ graphic 2021.png" alt="Gartner CAIDS MQ graphic 2021.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Gartner recently released its Magic Quadrant for 2021 Cloud AI Developer Services. Microsoft is in the Leaders quadrant and was positioned highest on the ability-to-execute axis. You can download a complimentary copy of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services&lt;/A&gt; for the full report. In this post, we’ll look at why we think Microsoft was placed in the Leaders quadrant.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;According to the report, “Gartner defines cloud AI developer services (CAIDS) as cloud-hosted or containerized services/models that allow development teams and business users to leverage artificial intelligence models via APIs, SDKs, or applications without requiring deep data science expertise.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;They specifically evaluated services with capabilities in language, vision, and automated machine learning. For Azure, this includes Azure Cognitive Services, Azure Machine Learning, and Microsoft’s conversational AI portfolio. For Power Platform, this includes AI Builder and Power Virtual Agents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;“Gartner believes that enterprise development teams will increasingly incorporate models built using AI and ML into applications. These services currently fall into three main functional areas: language, vision and automated machine learning (autoML). The language services include natural language understanding (NLU), conversational agent frameworks, text analytics, sentiment analysis and other capabilities. The vision services include image recognition, video content analysis and optical character recognition (OCR). The autoML services include automated tools that will let developers do data preparation, feature engineering, create models, deploy, monitor and manage models without having to learn data science.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure AI enables you to develop AI applications on your terms, apply AI responsibly, and deploy mission-critical AI solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Develop on your terms&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure AI allows you to build AI applications in your preferred software development language and deploy them in the cloud, on-premises, or at the edge. Azure provides options for data scientists and developers of all skill levels – no machine learning expertise required. See the Microsoft section of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Apply AI responsibly&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure offers tools and resources to help you understand, protect, and control your AI solutions, including responsible ML toolkits, responsible bot development guidelines, tools to help you explain model behavior and test for fairness, and more. We never use your data to train our models, and we keep principles like inclusiveness, fairness, transparency, and accountability in mind at every stage of our AI research, development, and deployment. See the Microsoft section of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deploy mission-critical solutions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure lets you access the same AI services that power products like Microsoft Teams and Xbox, and that are proven at global scale. Azure leads the industry when it comes to security, and we have the most comprehensive compliance coverage of any cloud service provider. We continue to innovate and our Microsoft Research team has made significant breakthroughs, most recently reaching human parity with &lt;A href="https://blogs.microsoft.com/ai/azure-image-captioning/" target="_blank" rel="noopener"&gt;image captioning&lt;/A&gt;. See the Microsoft section of the &lt;A href="https://www.gartner.com/reprints/?id=1-25C36W9W&amp;amp;ct=210226&amp;amp;st=sb" target="_blank" rel="noopener"&gt;Magic Quadrant for Cloud AI Developer Services.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Whether you’re a professional developer or data scientist, or just getting started, we hope that you can use Azure AI services to build impactful AI-powered applications that solve complex problems and enhance customer experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 19 Mar 2021 16:17:37 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100</guid>
      <dc:creator>maddybutzbach</dc:creator>
      <dc:date>2021-03-19T16:17:37Z</dc:date>
    </item>
    <item>
      <title>The Tenets of Knowledge Management Adoption</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Knowledge Management Systems and Adoption Key Tenets:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Sonia M. Ang – CSA&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In today’s competitive business environment organizations need a clear roadmap that aligns with their training needs and focuses on both short-and long-term objectives.&amp;nbsp; Knowledge Management is a tool that can be implemented to identify and appeal to the training needs of the modern employee.&amp;nbsp; A successful Enterprise Learning system goes beyond the organizational level and allows employees access to information and knowledge, thus creating better alignment at the enterprise level.&amp;nbsp;&amp;nbsp; A well-designed Knowledge Management System can break down barriers by providing partners, clients, and customers with not only essential information and robust training, but also opportunities to promote and inform your organization’s products and services.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Employee retention and customer satisfaction are essential to any organizations long-term success.&amp;nbsp;&amp;nbsp; When considering your organization’s ROI, the benefits of Enterprise Learning are threefold:&amp;nbsp; retention (customers and employees), satisfaction, and improved profitability.&amp;nbsp; Enterprise Learning can be leveraged to provide better development and training opportunities thus promoting a feeling of empowerment amongst your team.&amp;nbsp; Enterprise Learning promotes efficiencies in training, in turn your organization will recognize the cost-saving benefits due to lower employee turnover and customer churn.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Enterprise Knowledge Management Adoption&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;1. &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Management Sponsorship and a COE&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;An executive sponsor provides that critical link between executive leadership and project management and helps support projects successfully to their completion at their expected performance.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Sponsorship of the Enterprise Knowledge Management Project will enhance your product base and create opportunities for your company in the rapidly advancing Knowledge Management sector.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Executive sponsorship will align with our company’s strategy to be experts in Knowledge Management.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;  &lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Microsoft will be at the forefront of Knowledge Management as organizations rush to adopt more efficient and effective training strategies that better align to the modern worker's needs.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;2. &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Beyond Training but Execution&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Training programs at the company level are often too extensive in complexity and the amount of learning material can be downright overwhelming.&amp;nbsp; Training professionals need to venture beyond traditional learning and utilize learning strategies that support the needs of today’s modern professionals.&amp;nbsp; Microlearning, for example, provides a host of benefits to your organization in terms of increased learner participation, memorability of courses, and quick deployment with easy updates to your digital learning assets.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Knowledge Management allows for learning concepts to be extracted from larger training programs and utilized as checklists and instructional videos that are easily accessible at a moment’s notice.&amp;nbsp; When learning material is successfully mined it allows the learning process to be refined, thus challenging concepts can be identified and made easier to process and understand.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;3. Collaborative and Social Learning&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;How does your organization support collaboration, problem-solving and the co-creation of knowledge?&amp;nbsp; A successful Enterprise Learning Strategy will push an organization forward by improving collaboration via robust communities and meaningful discussion forums.&amp;nbsp;&amp;nbsp; Building professional learning communities on platforms like Slack and Microsoft TEAMS can help break the walls down in organizations where ideas and knowledge may be siloed.&amp;nbsp; In addition to collaborative discussion platforms that give all community members a voice, the development of Expert Finders can be an important catalyst for creating a robust culture of collaboration.&amp;nbsp; Hidden ideas and knowledge will organically emerge from people in your organization that may hold previously unearthed niche expertise.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;4. &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Where is the Data?&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Generating data for analysis is the foundation of a robust Enterprise Learning strategy.&amp;nbsp; The key to success is to build data collection directly into technical systems supported by a centralized knowledge repository.&amp;nbsp; Data collection in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;education sphere has traditionally focused on summative assessments like exams that are intended to measure a learner’s mastery of objectives.&amp;nbsp; Using the specifications in Enterprise Learning provides another set of metrics by allowing an organization to track formative assessments, such as data and social learning activity.&amp;nbsp; For example, these metrics allows an organization to collect new data, adding another layer of data to your knowledge repository to support the creation of more meaningful formative and summative assessments.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;5. Reusable Content and Reproducibility&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;How do organizations move past the traditional content development models where bulky training manuals were the norm?&amp;nbsp; Learning data and insights help organizations to build learning solutions that are reusable and provide 24/7 access to learning.&amp;nbsp; Additionally, a &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Headless CMS&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt; allows your organization’s content creators to move away from the rigid templates that most traditional learning management systems utilize. This means content creators have more control over the quality of their content, and this streamlines the process of creating unique digital learning experiences for both your employees and customers.&amp;nbsp; Perhaps your organization releases a short instructional video on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;company’s&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rules and regulations.&amp;nbsp; This learning asset has value as both a stand-alone content object in a knowledge base as well as a learning module in a more extensive communications course.&amp;nbsp; Content creators in your organization, ranging from instructional designers to marketing professionals, often create multiple versions of the same material.&amp;nbsp; Creating content that is reusable and not redundant is more efficient as you are not reinventing the wheel.&amp;nbsp; Additionally, you are not burdened with trying to maintain and keep up-to-date multiple versions of the same learning assets.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;6. “Findability” of Learning Assets: Digitization and Technology&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The successful implementation of KM tools can enhance the user experience through successful mining and classification of metadata.&amp;nbsp; The power of KM is powerful in that it provides a taxonomy, ontology, and a finely tuned search system. A well-designed metadata strategy will take advantage of your learning assets which may include courses, webinars, professional learning communities (PLCs) and subject matter experts in your organization.&amp;nbsp; This myriad of data exits across multiple systems in your organization. Using metadata that is contextualized and consistent will ensure that your data is findable.&amp;nbsp; Taking it one-step further ontologies can be tapped to support a complete network of shareable and reusable knowledge across a domain for each unique user.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Summary&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:312,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The adoption of an Enterprise Management system can reframe your organization’s knowledge and learning infrastructure.&amp;nbsp; Modern learners need to consume information quickly thus allowing them to efficiently apply and master essential skills and strategies.&amp;nbsp; A well-engineered Enterprise Learning Plan that focuses on empowering your community will result in higher learner engagement, enhanced workplace skills and a robust culture-of-knowledge across your organization.&amp;nbsp; Curiosity will drive success rather than traditional command and control training strategies.&amp;nbsp; Empower your community by giving them the autonomy to self-direct their own learning.&amp;nbsp; As a result, your organization will enjoy increased engagement, enhanced workforce skills and, in turn, a robust learning culture will grow and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;flourish&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 19 Mar 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091</guid>
      <dc:creator>Sonia Ang</dc:creator>
      <dc:date>2021-03-19T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Extract Data from PDFs using Form Recognizer with Code or Without!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299</link>
      <description>&lt;P&gt;Form Recognizer is a powerful tool to help build a variety of document machine learning solutions. It is one service; however, it is made up of many prebuilt models that can perform a variety of essential document functions. You can even custom train a model using supervised or unsupervised learning for tasks outside of the scope of the prebuilt models! Read more about all the features of Form Recognizer&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/overview?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;. In this example we will be looking at how to use one of the prebuilt models in the Form Recognizer service that can extract the data from a PDF document dataset. Our documents are invoices with common data fields, so we are able to use the prebuilt model without having to build a customized model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sample Invoice:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="invoice.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264367i59D7C6180F17623E/image-size/large?v=v2&amp;amp;px=999" role="button" title="invoice.png" alt="invoice.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After we take a look at how to do this with Python and Azure Form Recognizer, we will take a look at how to do the same process with no code using the Power Platform services: Power Automate and Form Recognizer built into AI Builder. In the Power Automate flow we are scheduling a process to happen every day. The process looks in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;blob container to see if there are new files to be processed. If there are, it gets all blobs from the container and loops through each blob to extract the PDF data using a prebuilt AI Builder step. Then it deletes the processed document from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. See what it looks like below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Power Automate Flow:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="flowaibuild.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264369i241F53F6E21A228F/image-size/large?v=v2&amp;amp;px=999" role="button" title="flowaibuild.png" alt="flowaibuild.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Prerequisites for Python&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Account&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3028733&amp;amp;WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Sign up here!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.anaconda.com/products/individual" target="_blank" rel="nofollow noopener"&gt;Anaconda&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and/or&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://code.visualstudio.com/Download?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;VS Code&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Basic programming knowledge&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;A id="user-content-prerequisites-for-power-automate" class="anchor" href="https://github.com/cassieview/FormRecognizer#prerequisites-for-power-automate" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Prerequisites for Power Automate&lt;/FONT&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Power Automate Account&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/power-automate/sign-up-sign-in/?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Sign up here!&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;No programming knowledge&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;FONT size="5"&gt;Process PDFs with Python and Azure Form Recognizer Service&lt;/FONT&gt;&lt;/H2&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-services" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-services" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create Services&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, let's create the Form Recognizer Cognitive Service.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Go to &lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;portal.azure.com&lt;/A&gt; to create the resource or click this&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" target="_blank" rel="nofollow noopener"&gt;link&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Now let's create a storage account to store the PDF dataset we will be using in containers. We want two containers: one for the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;PDFs and one for the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;unprocessed PDFs.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/common/storage-account-create?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Azure Storage Account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Create two containers:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-upload-data" class="anchor" href="https://github.com/cassieview/FormRecognizer#upload-data" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Upload data&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Upload your dataset to the Azure Storage&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container, since those PDFs still need to be processed. Once processed, they will be moved to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. If you would rather script the upload than use the portal, a small sketch follows below.&lt;/P&gt;
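&lt;P&gt;If you have many PDFs, uploading them one at a time in the portal gets tedious, so here is a minimal sketch of doing it from Python instead. This is not part of the original walkthrough: it assumes the azure-storage-blob package is installed, that your invoices sit in a local folder named &lt;CODE&gt;invoices&lt;/CODE&gt; (a hypothetical folder name), and that you paste in your storage account connection string from the Azure Portal.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;import os

from azure.storage.blob import BlobServiceClient

# Assumption: connection string copied from the Azure Portal (Storage account &amp;gt; Access keys).
connect_str = "&amp;lt;Get connection string from the Azure Portal&amp;gt;"
blob_service_client = BlobServiceClient.from_connection_string(connect_str)

# The containers can also be created in code instead of the portal.
for container_name in ("raw", "processed"):
    if not blob_service_client.get_container_client(container_name).exists():
        blob_service_client.create_container(container_name)

# Upload every PDF from the local "invoices" folder into the "raw" container.
raw_container_client = blob_service_client.get_container_client("raw")
for file_name in os.listdir("invoices"):
    if file_name.lower().endswith(".pdf"):
        with open(os.path.join("invoices", file_name), "rb") as data:
            raw_container_client.upload_blob(name=file_name, data=data, overwrite=True)&lt;/PRE&gt;
&lt;/DIV&gt;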
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The result should look something like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="storageaccounts.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264370i359B312F0A1B3D34/image-size/large?v=v2&amp;amp;px=999" role="button" title="storageaccounts.png" alt="storageaccounts.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Create Notebook and Install Packages&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we have our data stored in Azure Blob Storage we can connect and process the PDF forms to extract the data using the Form Recognizer Python SDK. You can also use the Python SDK with local data if you are not using Azure Storage. This example will assume you are using Azure Storage.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Create a new&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://code.visualstudio.com/docs/python/jupyter-support#_create-or-open-a-jupyter-notebook?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Jupyter notebook in VS Code&lt;/A&gt;.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Install the Python SDK&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;!p&lt;SPAN class="pl-s1"&gt;ip&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;install&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;-&lt;/SPAN&gt;&lt;SPAN class="pl-s1"&gt;ai&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;-&lt;/SPAN&gt;&lt;SPAN class="pl-s1"&gt;formrecognizer&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;-&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;-&lt;/SPAN&gt;&lt;SPAN class="pl-s1"&gt;pre&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Then we need to import the packages.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;os&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;core&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;exceptions&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;ResourceNotFoundError&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;ai&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;formrecognizer&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;FormRecognizerClient&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;core&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;credentials&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;AzureKeyCredential&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;os&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;uuid&lt;/SPAN&gt;
&lt;SPAN class="pl-k"&gt;from&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;azure&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;storage&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt; &lt;SPAN class="pl-k"&gt;import&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;BlobServiceClient&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;BlobClient&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;ContainerClient&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;__version__&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-formrecognizerclient" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-formrecognizerclient" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create FormRecognizerClient&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Update the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with the values from the service you created. These values can be found in the Azure Portal for the Form Recognizer service you created, under&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Keys and Endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the navigation menu.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-s1"&gt;endpoint&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;your endpoint&amp;gt;"&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;key&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;your key&amp;gt;"&lt;/SPAN&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;We then use the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;endpoint&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to connect to the service and create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.formrecognizerclient?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;FormRecognizerClient&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-s1"&gt;form_recognizer_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;FormRecognizerClient&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;endpoint&lt;/SPAN&gt;, &lt;SPAN class="pl-v"&gt;AzureKeyCredential&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;key&lt;/SPAN&gt;))&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;print_result&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;helper function, which we will use later to print out the results of each invoice.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-k"&gt;def&lt;/SPAN&gt; &lt;SPAN class="pl-en"&gt;print_result&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob_name&lt;/SPAN&gt;):
    &lt;SPAN class="pl-k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;idx&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;in&lt;/SPAN&gt; &lt;SPAN class="pl-en"&gt;enumerate&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;):
        &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"--------Recognizing invoice {}--------"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob_name&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"VendorName"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Vendor Name: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;vendor_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"VendorAddress"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Vendor Address: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;vendor_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerName"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Name: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_name&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerAddress"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Address: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_address&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"CustomerAddressRecipient"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Customer Address Recipient: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;customer_address_recipient&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceId"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Id: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_id&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceDate"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Date: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"InvoiceTotal"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Invoice Total: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;invoice_total&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))
        &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;invoice&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;fields&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"DueDate"&lt;/SPAN&gt;)
        &lt;SPAN class="pl-k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;:
            &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"Due Date: {} has confidence: {}"&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;format&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;value&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;due_date&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;confidence&lt;/SPAN&gt;))&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-connect-to-blob-storage" class="anchor" href="https://github.com/cassieview/FormRecognizer#connect-to-blob-storage" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Connect to Blob Storage&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Now let's&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-python?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;connect to our blob storage containers&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and create the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;BlobServiceClient&lt;/A&gt;. We will use the client to connect to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;containers that we created earlier.&lt;/LI&gt;
&lt;/UL&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-c"&gt;# Create the BlobServiceClient object which will be used to get the container_client&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;connect_str&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;"&amp;lt;Get connection string from the Azure Portal&amp;gt;"&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-v"&gt;BlobServiceClient&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;from_connection_string&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;connect_str&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Container client for raw container.&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get_container_client&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"raw"&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Container client for processed container&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;processed_container_client&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_service_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;get_container_client&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"processed"&lt;/SPAN&gt;)

&lt;SPAN class="pl-c"&gt;# Get base url for container.&lt;/SPAN&gt;
&lt;SPAN class="pl-s1"&gt;invoiceUrlBase&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;primary_endpoint&lt;/SPAN&gt;
&lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrlBase&lt;/SPAN&gt;)&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;HINT: If you get a "HttpResponseError: (InvalidImageURL) Image URL is badly formatted." error, make sure the proper permissions to access the container are set. Learn more about Azure Storage Permissions&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/storage/common/storage-auth?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;.&lt;/EM&gt;&lt;/P&gt;
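&lt;P&gt;If you would rather keep the containers private than open up access, one option (not shown in the original walkthrough) is to append a short-lived, read-only SAS token to each blob URL before passing it to Form Recognizer. The sketch below assumes you have the storage account name and key at hand; the helper name &lt;CODE&gt;blob_url_with_sas&lt;/CODE&gt; is just illustrative.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

def blob_url_with_sas(account_name, account_key, container_name, blob_name):
    # Generate a read-only SAS token that expires after one hour.
    sas_token = generate_blob_sas(
        account_name=account_name,
        container_name=container_name,
        blob_name=blob_name,
        account_key=account_key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    return f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"

# A URL built this way could be passed to begin_recognize_invoices_from_url
# instead of the plain primary_endpoint-based URL used above.&lt;/PRE&gt;
&lt;/DIV&gt;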
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-extract-data-from-pdfs" class="anchor" href="https://github.com/cassieview/FormRecognizer#extract-data-from-pdfs" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Extract Data from PDFs&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are ready to process the blobs now! Here we will call&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;list_blobs&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to get a list of blobs in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container. Then we will loop through each blob and call&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;begin_recognize_invoices_from_url&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to extract the data from the PDF, using our&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;print_result&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;helper method to print the results. Once we have extracted the data from the PDF we will&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;upload_blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;delete_blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;container.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;&lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s"&gt;"&lt;SPAN class="pl-cce"&gt;\n&lt;/SPAN&gt;Processing blobs..."&lt;/SPAN&gt;)

&lt;SPAN class="pl-s1"&gt;blob_list&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;list_blobs&lt;/SPAN&gt;()
&lt;SPAN class="pl-k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;in&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;blob_list&lt;/SPAN&gt;:
    &lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s"&gt;f'&lt;SPAN class="pl-s1"&gt;&lt;SPAN class="pl-kos"&gt;{&lt;/SPAN&gt;invoiceUrlBase&lt;SPAN class="pl-kos"&gt;}&lt;/SPAN&gt;&lt;/SPAN&gt;/&lt;SPAN class="pl-s1"&gt;&lt;SPAN class="pl-kos"&gt;{&lt;/SPAN&gt;blob.name&lt;SPAN class="pl-kos"&gt;}&lt;/SPAN&gt;&lt;/SPAN&gt;'&lt;/SPAN&gt;
    &lt;SPAN class="pl-en"&gt;print&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt;)
    &lt;SPAN class="pl-s1"&gt;poller&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;form_recognizer_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;begin_recognize_invoices_from_url&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoiceUrl&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Get results&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt; &lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt; &lt;SPAN class="pl-s1"&gt;poller&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;result&lt;/SPAN&gt;()

    &lt;SPAN class="pl-c"&gt;# Print results&lt;/SPAN&gt;
    &lt;SPAN class="pl-en"&gt;print_result&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;invoices&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;name&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Copy blob to processed&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;processed_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;upload_blob&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;.&lt;SPAN class="pl-s1"&gt;blob_type&lt;/SPAN&gt;, &lt;SPAN class="pl-s1"&gt;overwrite&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;=&lt;/SPAN&gt;&lt;SPAN class="pl-c1"&gt;True&lt;/SPAN&gt;)

    &lt;SPAN class="pl-c"&gt;# Delete blob from raw now that its processed&lt;/SPAN&gt;
    &lt;SPAN class="pl-s1"&gt;raw_container_client&lt;/SPAN&gt;.&lt;SPAN class="pl-en"&gt;delete_blob&lt;/SPAN&gt;(&lt;SPAN class="pl-s1"&gt;blob&lt;/SPAN&gt;)&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;P&gt;Each result should look similar to this for the above invoice example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="pythonresult.png" style="width: 546px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264371iA3116FA86E09C32D/image-dimensions/546x131?v=v2" width="546" height="131" role="button" title="pythonresult.png" alt="pythonresult.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The prebuilt invoices model worked great for our invoices so we don't need to train a customized Form Recognizer model to improve our results. But what if we did and what if we didn't know how to code?! You can still leverage all this awesomeness in AI Builder with Power Automate without writing any code. We will take a look at this same example in Power Automate next.&lt;/P&gt;
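&lt;P&gt;Before we switch to the no-code route, here is a minimal sketch (not part of this example) of what custom training looks like in the Python SDK, in case the prebuilt invoice model ever falls short for your documents. The endpoint, key, and SAS URL placeholders are values you would fill in from your own resources, and the container behind the SAS URL is assumed to hold at least five sample forms.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

endpoint = "&amp;lt;your endpoint&amp;gt;"
key = "&amp;lt;your key&amp;gt;"
training_files_url = "&amp;lt;SAS URL to a container holding your training forms&amp;gt;"

form_training_client = FormTrainingClient(endpoint, AzureKeyCredential(key))

# Train without labels (unsupervised); set use_training_labels=True for supervised training.
poller = form_training_client.begin_training(training_files_url, use_training_labels=False)
custom_model = poller.result()
print("Custom model id:", custom_model.model_id)

# The custom model id can then be used with
# form_recognizer_client.begin_recognize_custom_forms_from_url(custom_model.model_id, form_url).&lt;/PRE&gt;
&lt;/DIV&gt;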
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A id="user-content-use-form-recognizer-with-ai-builder-in-power-automate" class="anchor" href="https://github.com/cassieview/FormRecognizer#use-form-recognizer-with-ai-builder-in-power-automate" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Use Form Recognizer with AI Builder in Power Automate&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can achieve these same results using no code with Form Recognizer in AI Builder with Power Automate. Let's take a look at how we can do that.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-create-a-new-flow" class="anchor" href="https://github.com/cassieview/FormRecognizer#create-a-new-flow" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Create a New Flow&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Log in to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://flow.microsoft.com/" target="_blank" rel="nofollow noopener"&gt;Power Automate&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;then click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Scheduled Cloud Flow&lt;/CODE&gt;. You can trigger Power Automate flows in a variety of ways so keep in mind that you may want to select a different trigger for your project.&lt;/LI&gt;
&lt;LI&gt;Give the Flow a name and select the schedule you would like the flow to run on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-connect-to-blob-storage-1" class="anchor" href="https://github.com/cassieview/FormRecognizer#connect-to-blob-storage-1" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Connect to Blob Storage&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;New Step&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;List blobs&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Step
&lt;UL&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;List blobs&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the ellipsis and click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create new connection&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if your storage account isn't already connected
&lt;UL&gt;
&lt;LI&gt;Fill in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Connection Name&lt;/CODE&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Storage Account name&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(the account you created), and the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Storage Account Access Key&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which you can find in the resource keys in the Azure Portal)&lt;/LI&gt;
&lt;LI&gt;Then select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Create&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Once the storage account is selected click the folder icon on the right of the list blobs options. You should see all the containers in the storage account, select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;raw&lt;/CODE&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your flow should look something like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="connecttoblob.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264373iCE2CBD509DA1B8DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="connecttoblob.png" alt="connecttoblob.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Loop Through Blobs to Extract the Data&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click the plus sign to create a new step&lt;/LI&gt;
&lt;LI&gt;Click&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Control&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the textbox and a list of blob properties will appear. Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;value&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;LI&gt;Next select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;add action&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from within the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Flow step.&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Click the textbox and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property. This will get the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;File content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;that we will pass into the Form Recognizer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Process and save information from invoices&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Click the plus sign and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;add new action&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Process and save information from invoices&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the textbox and then the property&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;File Content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Get blob content&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;section&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Copy Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Repeat the add action steps&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Copy Blob&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Source url&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;text box and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Destination blob path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and put&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;/processed&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for the processed container&lt;/LI&gt;
&lt;LI&gt;Select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Overwrite?&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;dropdown and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Yes&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;if you want the copied blob to overwrite blobs with the existing name.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Add the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Delete Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;step:
&lt;UL&gt;
&lt;LI&gt;Repeat the add action steps&lt;/LI&gt;
&lt;LI&gt;Search for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Azure Blob Storage&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and select&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Delete Blob&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Blob&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;text box and select the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Path&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;property&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Apply to each&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;block should look something like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="applytoeachblock.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264375i8CB49A960730C7EF/image-size/large?v=v2&amp;amp;px=999" role="button" title="applytoeachblock.png" alt="applytoeachblock.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Save and Test the Flow
&lt;UL&gt;
&lt;LI&gt;Once you have completed creating the flow save and test it out using the built in test features that are part of Power Automate.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This prebuilt model again worked great on our invoice data. However, if you have a more complex dataset, use AI Builder to label and create a customized machine learning model for your specific dataset. Read more about how to do that&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/tutorial-ai-builder?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A id="user-content-conclusion" class="anchor" href="https://github.com/cassieview/FormRecognizer#conclusion" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;Conclusion&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We went over a fraction of the things that you can do with Form Recognizer so don't let the learning stop here! Check out the below highlights of new Form Recognizer features that were just announced and the additional doc links to dive deeper into what we did here.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;A id="user-content-additional-resources" class="anchor" href="https://github.com/cassieview/FormRecognizer#additional-resources" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Additional Resources&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/blog/new-features-for-form-recognizer-now-available/#:~:text=New%20features%20for%20Form%20Recognizer%20now%20available.%20Neta,tables%20from%20documents%20to%20accelerate%20their%20business%20processes." target="_blank" rel="nofollow noopener"&gt;New Form Recognizer Features&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/overview?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;What is Form Recognizer?&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=preview%2Cv2-1&amp;amp;pivots=programming-language-python?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Quickstart: Use the Form Recognizer client library or REST API&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/tutorial-ai-builder?WT.mc_id=aiml-14201-cassieb" target="_blank" rel="nofollow noopener"&gt;Tutorial: Create a form-processing app with AI Builder&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;AI Developer Resources page&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=TX7XwwIG5lw&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=5&amp;amp;t=6s" target="_self"&gt;AI Essentials video including Form Recognizer&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Mar 2021 16:43:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299</guid>
      <dc:creator>cassieview</dc:creator>
      <dc:date>2021-03-16T16:43:56Z</dc:date>
    </item>
    <item>
      <title>Model understanding with Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Mehrnoosh Sameki, Program Manager, Azure Machine Learning.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Overview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Model interpretability and fairness are part of the ‘Understand’ pillar of Azure Machine Learning’s Responsible ML offerings. As machine learning becomes ubiquitous in decision-making from the end-user utilizing AI-powered applications to the business stakeholders using models to make data-driven decisions, it is necessary to provide tools at scale for model transparency and fairness.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" style="width: 626px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262968iCD258909811687E8/image-size/large?v=v2&amp;amp;px=999" role="button" title="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Explaining a machine learning model and performing fairness assessment is important for the following users:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data scientists and model evaluators - At training time to help them to understand their model predictions and assess the fairness of their AI systems, enhancing their ability to debug and improve models.&lt;/LI&gt;
&lt;LI&gt;Business stakeholders and auditors - To build trust with defined ML models and deploy them more confidently.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Customers like Scandinavian Airlines (SAS) and Ernst &amp;amp; Young (EY) put interpretability and fairness packages to the test to be able to deploy models more confidently.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://customers.microsoft.com/en-us/story/781802-sas-travel-transportation-azure-machine-learning" target="_blank" rel="noopener"&gt;SAS used interpretability to confidently identify fraud&lt;/A&gt; in its EuroBonus loyalty program. SAS data scientists could debug and verify model predictions using interpretability. They produced explanations about model behavior that gave stakeholders confidence in the machine learning models and assisted with meeting regulatory requirements.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://customers.microsoft.com/doclink/809460-ey-partner-professional-services-azure-machine-learning-fairlearn" target="_blank" rel="noopener"&gt;EY utilized fairness assessment and unfairness mitigation&lt;/A&gt; techniques with real mortgage adjudication data to improve the fairness of loan decisions from having an accuracy disparity of 7 percent between men and women to less than 0.5 percent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are releasing enhanced experiences and feature additions for the interpretability and fairness toolkits in Azure Machine Learning, to empower more ML practitioners and teams to build trust with AI systems.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="6" color="#000000"&gt;Model understanding using interpretability and fairness toolkits&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These two toolkits can be used together to understand model predictions and mitigate unfairness. For this demonstration, we shall take a look at a loan allocation scenario. Let’s say that the label indicates whether each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.&lt;/P&gt;
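&lt;P&gt;To make the rest of the walkthrough concrete, here is a minimal, synthetic stand-in for such a predictor. The column names, the generated data, and the scikit-learn model are illustrative assumptions only, not the dataset or model used in the videos below.&lt;/P&gt;
&lt;DIV class="highlight highlight-source-python"&gt;
&lt;PRE&gt;import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic loan-style data; 1 = repaid the loan, 0 = did not repay.
rng = np.random.default_rng(0)
n = 1000
data = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "loan_amount": rng.normal(20_000, 5_000, n),
    "credit_history_years": rng.integers(0, 30, n),
    "sex": rng.choice(["female", "male"], n),
})
y = (data["income"] / data["loan_amount"] + rng.normal(0, 0.5, n) &amp;gt; 2.5).astype(int)

# Keep the sensitive feature out of the training features but hold on to it for the fairness assessment.
X = data.drop(columns=["sex"])
X_train, X_test, y_train, y_test, sex_train, sex_test = train_test_split(
    X, y, data["sex"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)&lt;/PRE&gt;
&lt;/DIV&gt;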
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Tech blog diagram.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262695i2806C3064A5DEAB3/image-size/large?v=v2&amp;amp;px=999" role="button" title="Tech blog diagram.jpg" alt="Tech blog diagram.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;FONT size="5"&gt;Identify your model's fairness issues&lt;/FONT&gt;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our revamped fairness dashboard can help uncover the harm of allocation which leads to the model unfairly allocating loans among different demographic groups. The dashboard can additionally uncover harm of quality of service which leads to a model failing to provide the same quality of service to some people as they do to others. Using the fairness dashboard, you can identify if our model treats different demographics of sex unfairly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Dashboard configurations&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you first load the fairness dashboard, you need to configure it with desired settings, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;selection of your sensitive demographic of choice (e.g., sex&lt;A href="#_ftn1" target="_self" name="_ftnref1"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&lt;/A&gt;)&lt;/LI&gt;
&lt;LI&gt;model performance metric (e.g., accuracy)&lt;/LI&gt;
&lt;LI&gt;fairness metric (e.g., demographic parity difference).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Model assessment view&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After setting the configurations, you will land on a model assessment view where you can see how the model is treating different demographic groups.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-fairness-toolkit/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation model’s fairness with the AzureML’s fairness toolkit - Microsoft Channel 9 Video"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our fairness assessment shows an 18.3% disparity in the selection rate (or demographic group difference). According to that insight, 18.3% more males are receiving qualifications for loan acceptance compared to females. Now that you’ve seen some unfairness indicators in your model, you can next use our interpretability toolkit to understand why your model is making such predictions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Diagnose your model’s predictions&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The revamped interpretability dashboard greatly improves on the user experience of the previous dashboard. In the loan allocation scenario, you can use the interpretability toolkit to understand how the model treats female loan applicants differently from male loan applicants (a minimal code sketch for generating these explanations follows the walkthrough below):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-interpretability/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation with interpretability toolkit - Microsoft Channel 9 Video"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Dataset cohort creation:&lt;/STRONG&gt; You can slice and dice your data into subgroups (e.g., female vs. male vs. unspecified) and investigate or compare your model’s performance and explanations across them.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="font-family: inherit;"&gt;Model performance tab:&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; With the predefined female and male cohorts, we can observe the different prediction distributions between the male and female cohorts, with females experiencing a higher predicted probability of being rejected for a loan.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dataset explorer tab:&lt;/STRONG&gt; Now that you have seen in the model performance tab how females are rejected at a higher rate than males, you can use the data explorer tab to observe the ground truth distribution between males and females. &amp;nbsp;For males, the ground truth data is well balanced between those receiving a rejection or approval whereas, for females, the ground truth data is heavily skewed towards rejection thereby explaining how the model could come to associate the label ‘female’ with rejection.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Aggregate feature importance tab:&lt;/STRONG&gt; Now we observe which top features contribute to the model’s overall prediction (also called global explanations) towards loan rejection. We sort our top feature importances by the Female cohort, which indicates that while the feature for “Sex” is the second most important feature to contribute towards the model’s predictions for individuals in the female cohort, it does not influence how the model makes predictions for individuals in the male cohort. The dependence plot for the feature “Sex” also shows that only the female group has positive feature importance towards the prediction of being rejected for a loan, whereas the model does not look at the feature “Sex” for males when making predictions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Individual feature importance &amp;amp; What-If tab:&lt;/STRONG&gt; Drilling deeper into the model’s prediction for a specific individual (also called local explanations), we look at the individual feature importances for only the Female cohort. We select an individual who is at the threshold of being accepted for a loan by the model and observe which features contributed towards her prediction of being rejected. “Sex” is the second most important feature contributing towards the model prediction for this individual. The Individual Conditional Expectation (ICE) plot calculates how a perturbation for a given feature value across a range can impact its prediction. We select the feature “Sex” and can see that if this feature had been flipped to male, the probability of being rejected is lowered drastically. We create a new hypothetical What-If point from this individual data point and switch only the “Sex” from female to male, and observe that without changing any other feature related to financial competency, the model now predicts that this individual will have their loan application accepted.&lt;/LI&gt;
&lt;/OL&gt;
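&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a minimal sketch of how the explanations behind these tabs can be generated (assuming the open-source interpret-community and raiwidgets packages and the hypothetical model and data splits from the training sketch above; your choice of explainer may differ):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: generate model explanations and launch the interpretability dashboard.
from interpret_community import TabularExplainer
from raiwidgets import ExplanationDashboard

explainer = TabularExplainer(model, X_train)            # SHAP-based explainer for tabular data
global_explanation = explainer.explain_global(X_test)   # aggregate (global) feature importances
print(global_explanation.get_feature_importance_dict())

# Explore cohorts, the dataset explorer, feature importances, and What-If in the dashboard.
ExplanationDashboard(global_explanation, model, dataset=X_test, true_y=y_test)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;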
&lt;P&gt;Once some potential fairness issues are observed and diagnosed, you can move to mitigate those unfairness issues.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Mitigate unfairness issues in your model&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The unfairness mitigation part is powered by the &lt;A href="http://fairlearn.org" target="_blank" rel="noopener"&gt;Fairlearn&lt;/A&gt; open-source package, which includes two types of mitigation algorithms: &lt;A href="https://arxiv.org/pdf/1610.02413.pdf" target="_blank" rel="noopener"&gt;postprocessing algorithms&lt;/A&gt; (&lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.ThresholdOptimizer" target="_blank" rel="noopener"&gt;ThresholdOptimizer&lt;/A&gt;) and &lt;A href="https://arxiv.org/pdf/1803.02453.pdf" target="_blank" rel="noopener"&gt;reduction algorithms&lt;/A&gt; (&lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch" target="_blank" rel="noopener"&gt;GridSearch&lt;/A&gt;, &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.ExponentiatedGradient" target="_blank" rel="noopener"&gt;ExponentiatedGradient&lt;/A&gt;). Both operate as “wrappers” around any standard classification or regression algorithm. &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch" target="_blank" rel="noopener"&gt;GridSearch&lt;/A&gt;, for instance, treats any standard classification or regression algorithm as a black box, and iteratively (a) re-weights the data points and (b) retrains the model after each re-weighting. After 10 to 20 iterations, this process results in a model that satisfies the constraints implied by the selected fairness metric while maximizing model performance. &lt;A href="https://fairlearn.github.io/v0.5.0/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.ThresholdOptimizer" target="_blank" rel="noopener"&gt;ThresholdOptimizer&lt;/A&gt;, on the other hand, takes as its input a scoring function that underlies an existing classifier and identifies a separate threshold for each group to optimize the performance metric, while simultaneously satisfying the constraints implied by the selected fairness metric.&lt;/P&gt;
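&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of the reduction approach (the grid size is illustrative, and the preprocessing objects and data splits are the hypothetical ones from the training sketch above):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: mitigate the demographic parity disparity with Fairlearn's GridSearch.
from fairlearn.reductions import GridSearch, DemographicParity
from sklearn.linear_model import LogisticRegression

# GridSearch re-weights the data and retrains the wrapped estimator at each grid point,
# so the estimator must accept sample_weight; encode the features up front.
X_train_enc = preprocess.fit_transform(X_train)
X_test_enc = preprocess.transform(X_test)

sweep = GridSearch(LogisticRegression(max_iter=1000),
                   constraints=DemographicParity(),
                   grid_size=20)
sweep.fit(X_train_enc, y_train, sensitive_features=A_train)

# Each candidate predictor trades off accuracy against demographic parity.
mitigated_preds = [est.predict(X_test_enc) for est in sweep.predictors_]
&lt;/LI-CODE&gt;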
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The fairness dashboard also enables the comparison of multiple models, such as the models produced by different learning algorithms and different mitigation approaches. Bypassing the dominated models of GridSearch for instance, you can see the unmitigated model on the upper right side (with the highest accuracy and highest demographic parity difference) and can click on any of the mitigated models to observe them further. This allows you to examine trade-offs between performance and fairness.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="model fairness comparison.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262696i74D857109F63B0D3/image-size/large?v=v2&amp;amp;px=999" role="button" title="model fairness comparison.png" alt="model fairness comparison.png" /&gt;&lt;/span&gt;&lt;/P&gt;
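&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A minimal sketch of producing this comparison view programmatically, assuming the raiwidgets package and the hypothetical objects from the sketches above:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: compare the unmitigated model against the GridSearch candidates.
from raiwidgets import FairnessDashboard

predictions = {"unmitigated": model.predict(X_test)}
for i, candidate in enumerate(sweep.predictors_):
    predictions[f"gridsearch_{i}"] = candidate.predict(X_test_enc)

FairnessDashboard(sensitive_features=A_test, y_true=y_test, y_pred=predictions)
&lt;/LI-CODE&gt;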
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Comparing results of unfairness mitigation&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After applying the unfairness mitigation, we go back to the interpretability dashboard and compare the unmitigated model with the mitigated model. In the figure below, we see a more even probability distribution for the female cohort for the mitigated model on the right:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model interpretability before after.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262697i9D6C42F08A512188/image-size/large?v=v2&amp;amp;px=999" role="button" title="Model interpretability before after.jpg" alt="Model interpretability before after.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Revisiting the fairness assessment dashboard, we also see a drastic decrease in demographic parity difference from 18.8% (unmitigated model) to 0.412% (mitigated model):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model fairness before after.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262698i60AA2DC0494DE9BF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Model fairness before after.jpg" alt="Model fairness before after.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Saving model explanations and fairness metrics to Azure Machine Learning Run History&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning’s (AzureML) interpretability and fairness toolkits can be run both locally and remotely. If run locally, the libraries will not contact any Azure services. Alternatively, you can run the algorithms remotely on AzureML compute and log all the explainability and fairness information into AzureML’s run history via the AzureML SDK, so you can save and share it with other team members or stakeholders in AzureML studio.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AML explanation.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262699i455620FEE621210A/image-size/large?v=v2&amp;amp;px=999" role="button" title="AML explanation.png" alt="AML explanation.png" /&gt;&lt;/span&gt;&lt;/P&gt;
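&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, a minimal sketch of uploading a model explanation to run history with the AzureML SDK (the experiment name and the explanation object are the hypothetical ones from the sketches above):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: log a model explanation to Azure Machine Learning run history
# so it can be viewed and shared from AzureML studio.
from azureml.core import Workspace, Experiment
from azureml.interpret import ExplanationClient

ws = Workspace.from_config()                   # assumes a local config.json for your workspace
run = Experiment(ws, "loan-allocation-rai").start_logging()

client = ExplanationClient.from_run(run)
client.upload_model_explanation(global_explanation, comment="global explanation: all features")

run.complete()
&lt;/LI-CODE&gt;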
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure ML’s Automated ML supports explainability for its best model as well as on-demand explainability for any other models generated by Automated ML.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Learn more&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/responsible-ai" target="_blank" rel="noopener"&gt;Explore this scenario&lt;/A&gt; and other sample notebooks in the Azure Machine Learning sample notebooks GitHub.&lt;/P&gt;
&lt;P&gt;Learn more about the &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml" target="_blank" rel="noopener"&gt;Responsible ML offerings in Azure Machine Learning&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability" target="_blank" rel="noopener"&gt;interpretability&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml" target="_blank" rel="noopener"&gt;fairness&lt;/A&gt; concepts and see documentation on how-to guides for using &lt;A href="https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability" target="_blank" rel="noopener"&gt;interpretability&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-fairness-aml" target="_blank" rel="noopener"&gt;fairness&lt;/A&gt; in Azure Machine Learning.&lt;/P&gt;
&lt;P&gt;Get started with a &lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;free trial of the Azure Machine Learning service&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="#_ftnref1" target="_blank" rel="noopener" name="_ftn1"&gt;&lt;SPAN&gt;[1]&lt;/SPAN&gt;&lt;/A&gt; This dataset is from the 1994 US Census Bureau Database where “sex” in the data was limited to binary categorizations.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Mar 2021 19:05:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141</guid>
      <dc:creator>mithigpe</dc:creator>
      <dc:date>2021-03-16T19:05:14Z</dc:date>
    </item>
    <item>
      <title>Advance Resource Access Governance for AML</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520</link>
      <description>&lt;DIV class="lia-message-subject-wrapper lia-component-subject lia-component-message-view-widget-subject-with-options"&gt;
&lt;DIV class="MessageSubject"&gt;
&lt;DIV class="MessageSubjectIcons "&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-message-body-wrapper lia-component-message-view-widget-body"&gt;
&lt;DIV id="bodyDisplay" class="lia-message-body"&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;Access control is a fundamental building block for enterprise customers, where protecting assets at various levels is absolutely necessary to ensure that only the relevant people, in positions of authority, are given access with the appropriate privileges. This is especially prevalent in machine learning, where data is essential to building ML models and companies are highly cautious about how data is accessed and managed, particularly since the introduction of GDPR. We are seeing an increasing number of customers seeking explicit control of not only the data but also the various stages of the machine learning lifecycle, from experimentation all the way to operationalization. Assets such as generated models, cluster creation and model deployment need to be governed to ensure that controls are in line with the company’s policy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Azure traditionally provides Role-based Access Control [1], which helps manage access to resources: who can access them and what they can do. This is primarily achieved via the concept of roles. A role defines a collection of permissions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Existing Roles in AML&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning provides three roles [3] for enterprise customers to provision coarse-grained access control, designed with simplicity in mind. The first role (Owner) has the highest level of privileges and grants full control of the workspace. This is followed by Contributor, a slightly more restricted role that prevents users from changing role assignments. Reader has the most restrictive permissions and is typically read or view only (see Figure 1 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rbac-3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262754iAB96302DF84B6F1D/image-size/large?v=v2&amp;amp;px=999" role="button" title="rbac-3.png" alt="rbac-3.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;Figure 1 - Existing AML roles&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;What we have found with customers is that coarse-grained access control immensely simplifies the management of roles and works quite well for a small team working primarily in an experimentation environment. However, when a company decides to operationalize its ML work, especially in the enterprise space, these roles become far too broad and too simplistic. Enterprise deployments tend to have several stages (such as dev, test, pre-prod, prod, etc.) and require various skill sets (data scientist, data engineer, etc.) with greater control at each stage. For example, a Data Scientist may not operate in the production environment, and a Data Engineer may only provision resources and should not have the ability to commission and decommission training clusters. Such governance policies are crucial for companies to enforce and monitor in order to maintain the integrity of their business and IT processes.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Unfortunately, such requirements cannot be captured with the existing roles. Enterprises need a better mechanism to define policies for the various assets in AML to satisfy their business-specific requirements.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This is where the exciting new advanced Role-based Access Control feature really shines. It is based on fine-grained access control at the component level (see Figure 2), with a number of pre-built out-of-the-box roles plus the ability to create custom roles that can capture and enforce more complex governance processes.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Advanced Fine-grained Role-based Access Control&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new advanced Role-based Access Control feature of AML solves many of the enterprise problems around granting or restricting user permissions for various components. AML currently defines 16 components with varying permissions.&lt;/P&gt;
&lt;BR /&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aml-components.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260370iC2929C8E66E43458/image-size/large?v=v2&amp;amp;px=999" role="button" title="aml-components.png" alt="aml-components.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 2 - Components Level RBAC&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Each component defines a list of actions such as read, write, delete, etc.&amp;nbsp; These actions can then be combined to create a specific custom role. As an illustration, Figure 3 below shows the list of actions currently available for the Datastore component.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="policy-1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262756iB724D52251F4900B/image-size/large?v=v2&amp;amp;px=999" role="button" title="policy-1.png" alt="policy-1.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 3 - Datastore Actions&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Datastores, along with Datasets, are important concepts in Azure Machine Learning, since they provide access to various data sources with lineage and tracking ability. Many enterprises have built global data lakes that contain terabytes of data, some of it highly sensitive. Companies are quite protective of who can access this data and require business justifications for how it is accessed and used. It is therefore imperative that tighter access control is mandated for the specific roles, such as a Data Engineer, that work with it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Fortunately, AML advanced access control provides custom roles to cater for company-specific access requirements, which may be a hybrid of the built-in roles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Custom Role&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A custom role [4] allows fine-grained access control to be defined on various components, such as the workspace, datastore, etc. A custom role:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Can be any combination of data or control plane actions that AzureML+AISC support.&lt;/LI&gt;
&lt;LI&gt;Is useful for creating roles scoped to a specific function, such as an MLOps Engineer&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These controls are defined in a JSON role definition, for example:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "Name": "Data Scientist",
    "IsCustom": true,
    "Description": "Can run experiment but can't create or delete datastore.",
    "Actions": ["*"],
    "NotActions": [
        "Microsoft.MachineLearningServices/workspaces/*/delete",
        "Microsoft.MachineLearningServices/workspaces/ datastores/write",
        "Microsoft.MachineLearningServices/workspaces/ datastores /delete",
        “Microsoft.MachineLearningServices/workspaces/datastores/write”,
        "Microsoft.Authorization/*/write"
    ],
    "AssignableScopes": [
        "/subscriptions/&amp;lt;subscription_id&amp;gt;/resourceGroups/&amp;lt;resource_group_name&amp;gt;/providers/Microsoft.MachineLearningServices/workspaces/&amp;lt;workspace_name&amp;gt;"
    ]
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The above code defines a Data Scientist role that can run an experiment but cannot create or delete a Datastore. This role can be created using the Azure CLI (az role definition create --role-definition filename); note that the CLI ML extension needs to be installed first.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--1063417684"&gt;Role Operation Workflow&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In an organization, the following activities are to be undertaken by various role owners.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A subscription (sub) admin comes in for the enterprise and requests Amlcompute quota&lt;/LI&gt;
&lt;LI&gt;They create a resource group and a workspace for a specific team, and also set workspace-level quota&lt;/LI&gt;
&lt;LI&gt;The team lead (aka workspace admin) comes in and starts creating compute within the quota that the sub admin defined for that workspace&lt;/LI&gt;
&lt;LI&gt;The Data Scientist then uses the compute (clusters or instances) that the workspace admin created for them.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1424095149"&gt;Roles for Enterprise&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AML provides a single environment for going end-to-end from experimentation to operationalization. For a start-up this is really useful: they tend to operate in a very agile manner, where many iterations can happen in a short period of time, and the ability to move quickly from ideation to production really reduces their cycle time. This may not be the case for enterprise customers, who typically use two or three environments to carry out their production workloads, such as Dev, QA and Prod.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Dev is used for experimentation, QA for satisfying various functional and non-functional requirements, and Prod for deployment into production for consumer usage.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The environments would also have various roles to carry out different activities, such as Data Scientist, Data Engineer and MLOps Engineer (see figure 8 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="role-4.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262758iDDE94C2D4806F6B4/image-size/large?v=v2&amp;amp;px=999" role="button" title="role-4.png" alt="role-4.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 8 - Enterprise Roles&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Data Scientist normally operates in the Dev environment and has full access to all the permissions related to carrying out experiments, such as provisioning training clusters, building models, etc. Some permissions are granted in the QA environment, primarily related to model testing and performance, and very minimal access is given to the Prod environment, mainly telemetry (see Table 1 below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Data Engineer, on the other hand, primarily operates in the Build and QA environments. Their main focus is data handling, such as loading data, doing some data wrangling, etc. They have restricted access in the Prod environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_10-1614737951507.png" style="width: 864px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260367iECC3583AFC821F5F/image-dimensions/864x345?v=v2" width="864" height="345" role="button" title="Mufajjul_Ali_10-1614737951507.png" alt="Mufajjul_Ali_10-1614737951507.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Table 1 - Role/environment Matrix&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An MLOps Engineer has some permissions in the Dev environment, but full permissions in QA and Prod. This is because an MLOps Engineer is tasked with building the pipelines, gluing things together, and ultimately deploying models in production.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The interesting part is how all these roles, environments and other components fit together in Azure to provide the much-needed access governance for enterprise customers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--517044038"&gt;Enterprise AML Roles Deployment&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It is imperative for enterprises to be able to model these complex role/environment mappings, as shown in Table 1. Fortunately, this can be achieved in Azure using a combination of AD groups, roles and resource groups.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_11-1614737951524.png" style="width: 724px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260368i241720729954A0AF/image-dimensions/724x470?v=v2" width="724" height="470" role="button" title="Mufajjul_Ali_11-1614737951524.png" alt="Mufajjul_Ali_11-1614737951524.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;Figure 9 - Enterprise AML Roles Deployment&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Fundamentally, Azure Active Directory groups play a major part in gluing all these components together to make it functional.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first step is to group the users for a given persona (DS, DE, etc.) into a “Role AD group”. Then assign roles with various RBAC actions (Data Writer, MLContributor, etc.) to this AD group. All of these users will now inherit the permissions specific to those roles. Multiple AD groups are created for the different persona roles.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Separate AD groups (‘AD group for Environment’) are created for each environment (i.e. Dev, QA and Prod), and the Role AD groups are added to these Environment AD groups. This creates a mapping of users belonging to a specific role persona, with given permissions, to an environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The ‘AD group for Environment’ is then assigned to a resource group, which contains a specific AML Workspace.&amp;nbsp; This ensures that the role permissions assigned to users will be enforced at the workspace level.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we have discussed the new advanced Role-based Access Control and how it can be applied in a complex enterprise with various environments and different user personas.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The important point to note is the flexibility that comes with this new feature: it can operate on any of the 16 AML components and define fine-grained access control for each through custom roles, in addition to the four out-of-the-box roles, which should be sufficient for the majority of customers.&lt;/P&gt;
&lt;H2 id="toc-hId-1970468795"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;References&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;[1]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/role-based-access-control/overview" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/role-based-access-control/overview&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[2]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-gb/services/machine-learning/" target="_blank" rel="noopener noreferrer"&gt;https://azure.microsoft.com/en-gb/services/machine-learning/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[3]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[4]&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles" target="_blank" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional Links:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;A tabindex="-1" title="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" target="_blank" rel="noreferrer noopener"&gt;https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;co-author:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/195402" target="_blank" rel="noopener"&gt;@Nishank Gupt and @John Wu&lt;/A&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 11 Mar 2021 09:28:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520</guid>
      <dc:creator>mufy</dc:creator>
      <dc:date>2021-03-11T09:28:08Z</dc:date>
    </item>
    <item>
      <title>Improving collaboration and productivity in Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Sharon Xu Program Manager, Azure Notebooks.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today we are very proud to announce the next set of productivity features and improvements for the notebook experience. Since &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009" target="_blank" rel="noopener"&gt;we announced the GA release&lt;/A&gt; of Notebooks in Azure Machine Learning (Azure ML), &lt;SPAN&gt;we have learned a lot from our customers&lt;/SPAN&gt;. Over the past few months, we have incrementally improved the notebook experience while simultaneously contributing back to &lt;A href="https://devblogs.microsoft.com/python/bringing-the-power-of-the-monaco-editor-to-nteract/" target="_blank" rel="noopener"&gt;the open source nteract project&lt;/A&gt;. The Azure ML team recently released a robust set of new functionalities designed to improve data scientist productivity and collaboration in Azure ML Notebooks.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Data scientist &amp;amp; Developer Productivity&lt;/H2&gt;
&lt;P&gt;We have spoken to several data scientists and developers to fully understand the additional features needed to improve productivity while developing machine learning projects. From feedback, we have found that users consistently asked for the following enhancements to speed up their workflow: a clear indication that a cell has finished running, a way to templatize common code excerpts, a way to check variable contents, and more. The following list is a compilation of the most highly requested productivity features:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Cell Status Bar. The status bar located in each cell indicates the cell state: whether a cell has been queued, successfully executed, or run into an error. The status bar also displays the execution time of the last run.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#explore-variables-in-the-notebook" target="_blank" rel="noopener"&gt;Variable Explorer&lt;/A&gt;. The Variable Explorer provides a quick glance into the data type, size, and contents of your variables and dataframes, allowing for quicker and simpler debugging.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_5-1614125127829.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257237iB8D804F9493E69DB/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_5-1614125127829.png" alt="abeomor_5-1614125127829.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 1: (1) Cell status bar (2) Variable explorer&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Notebook snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_4-1614125123189.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257236iFDAB2F193BD4424E/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_4-1614125123189.png" alt="abeomor_4-1614125123189.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 2 (1) Notebook snippets panel, showing all useful snippets&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/visualstudio/intellicode/overview" target="_blank" rel="noopener"&gt;IntelliCode&lt;/A&gt;. IntelliCode provides intelligent auto-completion suggestions using an ML algorithm that analyzes the context of your notebook code. IntelliCode suggestions are designated with a star.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_3-1614125118045.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257235i8B5BCB3FB1176BA5/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_3-1614125118045.png" alt="abeomor_3-1614125118045.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 3: IntelliCode in Azure ML Notebooks&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Keyboard shortcuts with full Jupyter parity. Azure ML now supports all the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#useful-keyboard-shortcuts" target="_blank" rel="noopener"&gt;keyboard shortcuts available in Jupyter&lt;/A&gt; and more.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#navigate-with-a-toc" target="_blank" rel="noopener"&gt;Table of Contents&lt;/A&gt;. For large notebooks, the Table of Contents panel allows you to navigate to the desired section. The sections of the notebook are designated by the Markdown headers.&lt;/LI&gt;
&lt;LI&gt;Markdown Side-by-side Editor in Notebooks. Within each notebook, the new side-by-side editor allows you to view the rendered results of your Markdown cells directly as you edit your notebook.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_2-1614125111883.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257234iAE1377BE0412F326/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_2-1614125111883.png" alt="abeomor_2-1614125111883.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 4: &amp;nbsp;(1) Table of content pane (2) Markdown side by side&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Collaboration and Sharing&lt;/H2&gt;
&lt;P&gt;An increasing number of data scientists and developers are creating notebooks collaboratively and sharing these notebooks across their team. We heard feedback that most users feel they lack adequate tools to edit notebooks simultaneously or share their notebooks with a broader audience, often resorting to screen shares and calls to complete or present work within a notebook. We recently released a few new features to help address some of these issues:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Co-editing (preview). Co-editing makes collaboration easier than ever. The notebook can now be shared by sending the notebook URL, allowing multiple users to edit the notebook in real-time.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_1-1614125073756.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257232i6D39A594790A76B9/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_1-1614125073756.png" alt="abeomor_1-1614125073756.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 5: Live Co-editing in Azure ML&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#export-a-notebook" target="_blank" rel="noopener"&gt;Export Notebook as Python, LaTeX or HTML&lt;/A&gt;. When you feel satisfied with the results from your notebook and ready to present to your colleagues, you can export the notebook to various formats for easy sharing. LaTeX, HTML, and .py are currently supported.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_0-1614125062082.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257231iFD15BDAC14ECD764/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_0-1614125062082.png" alt="abeomor_0-1614125062082.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 6: Export Notebooks as Python and more in Azure ML&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Get Started Today&lt;/H2&gt;
&lt;P&gt;To begin using these features in Azure ML Notebooks, you will first need to &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?tabs=python" target="_blank" rel="noopener"&gt;create an Azure Machine Learning workspace&lt;/A&gt;. Your Azure ML workspace serves as your one-stop-shop for all your machine learning needs, where you can create and share all your machine learning assets.&lt;/P&gt;
&lt;P&gt;Once you have your workspace set up, you can get started using &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks" target="_blank" rel="noopener"&gt;all the features in the Azure ML Notebooks experience&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt; The notebooks experience aims to provide you with an integrated suite of data science tools. Users can start working with a highly productive and collaborative Jupyter notebook editor directly in their workspace as well as quickly access other ML assets such as experiment details, datasets, models, and more.&lt;/P&gt;
&lt;P&gt;With the addition of this host of features, notebooks in Azure ML aim to improve every aspect of your development needs: collaboration, code editing, and debugging. Give these features a try and &lt;A href="https://www.surveymonkey.com/r/D9RHYPV?hostName=ml.azure" target="_self"&gt;leave your feedback&lt;/A&gt;. The feedback provided by our community is what drives us to improve and build new features. As we continue to push out new releases, keep an eye out, because the team has a few more exciting features coming out soon.&lt;/P&gt;
      <pubDate>Wed, 10 Mar 2021 18:28:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906</guid>
      <dc:creator>abeomor</dc:creator>
      <dc:date>2021-03-10T18:28:13Z</dc:date>
    </item>
    <item>
      <title>Integrating AI: Prototyping a No-Code solution with Power Apps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-prototyping-a-no-code-solution-with-power-apps/ba-p/2189550</link>
      <description>&lt;P&gt;&lt;SPAN data-key="598477b2276e441ba5d5f43dc3367887"&gt;You might have the cutting edge AI features but it is hard to know how useful it will be before letting your users beta test your prototype. You can &lt;STRONG&gt;build fast&lt;/STRONG&gt;, &lt;STRONG&gt;deploy&lt;/STRONG&gt; and &lt;STRONG&gt;deliver&lt;/STRONG&gt; your app and iterate without writing any code, using &lt;A href="https://powerapps.microsoft.com/en-us/ai-builder/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;AI Builder&lt;/A&gt; and &lt;A href="https://powerplatform.microsoft.com/en-us/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;This article explains what&amp;nbsp;&lt;A title="Power Platform Overview" href="https://powerplatform.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt;&amp;nbsp;is, as well as go through a &lt;STRONG&gt;step by step&lt;/STRONG&gt; process to create an application that detects objects from photos using &lt;A title="Explore Power Apps for free for 30 Days" href="https://docs.microsoft.com/en-us/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Power Apps&lt;/STRONG&gt;&lt;/A&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;&lt;A title="Use AI Builder in Power Apps" href="https://docs.microsoft.com/powerapps/use-ai-builder?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;AI Builder&lt;/A&gt;. &lt;/STRONG&gt;Check out the video below to see the app we will build to detect different &lt;A title="What is Mixed Reality" href="https://docs.microsoft.com/windows/mixed-reality/discover/mixed-reality?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Mixed Reality&lt;/A&gt; Headsets such as HoloLens version 1 and 2 Augmented Reality and Virtual Reality headsets and their hand controllers.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Yonet_0-1615005261042.gif" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261357iF46BD002163A4704/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Yonet_0-1615005261042.gif" alt="Yonet_0-1615005261042.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="0cf5dd6b842746298eb653a9ef54a55a"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;What is Power Platform?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;&lt;A href="https://powerplatform.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt; is a set of &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;tools,&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;APIs&lt;/STRONG&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;SDKs&lt;/STRONG&gt; that helps you &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;analyze your data&lt;/STRONG&gt; and build &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;automations,&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;applications&lt;/STRONG&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;virtual agents &lt;/STRONG&gt;with or without having to write any code.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="6ea2b24ab6c740399104f0737a1cbf7e"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerPlatform.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231026iEAFC2816C368547F/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerPlatform.png" alt="powerPlatform.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN&gt;What are Power Apps?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A title="Power Apps " href="https://powerapps.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Power Apps&lt;/A&gt; allows you to create applications with a drag-and-drop UI and easy integration of your data and 3rd-party APIs through connectors.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJoZWFkaW5nLTIlMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMldoYXQlMjBhcmUlMjBQb3dlciUyMEFwcHMlM0YlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;A &lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="8fd8f66effb84ecab4f17ad1733a3956"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;connector&lt;/STRONG&gt;&lt;/A&gt; is a proxy or a wrapper around an API that allows the underlying service to talk to Microsoft Power Automate, Microsoft Power Apps, and Azure Logic Apps. It provides a way for users to connect their accounts and leverage a set of pre-built &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;actions&lt;/STRONG&gt; and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;triggers&lt;/STRONG&gt; to build their apps and workflows. For example, you can use the&amp;nbsp;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/twitter/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="605f68c94edb4795a3483232cf113704"&gt;Twitter connector&lt;/A&gt; to get tweet data and visualize it in a dashboard or use the&amp;nbsp;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/connectors/twilio/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="6758248276c04f958e3929872b0dd8f3"&gt;Twilio connector&lt;/A&gt; to send your users text messages without having to be an expert in Twitter or Twilio APIs or having to write a line of code.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;Check out the&lt;A href="https://docs.microsoft.com/en-us/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="153c605aa3bd4f93b3f8915b02fae951"&gt; list of connectors for Power Apps&lt;/A&gt; to see all the APIs that are available. Notice &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-powerautomate-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="e1fb307473f54ee386f290577633f8dc"&gt;Power Automate&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-logicapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="b78c6ee33cfb420ea048aa6d91d0dba4"&gt;Logic App connectors&lt;/A&gt; might not be the same.&lt;/EM&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What is AI Builder?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://powerapps.microsoft.com/en-us/ai-builder/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;AI Builder&lt;/STRONG&gt;&lt;/A&gt; is one of the additional features of Power Apps. With AI Builder, you can &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;add intelligence to your apps&lt;/STRONG&gt; even if you have no coding or data science skills.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aiBuilderAppView.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231028i7F000B1F376D8511/image-size/large?v=v2&amp;amp;px=999" role="button" title="aiBuilderAppView.png" alt="aiBuilderAppView.png" /&gt;&lt;/span&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;&lt;SPAN&gt;What are some of the use cases for AI Builder?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="c2879a1fc7c24de08011e12588d72701"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="a8ea00f62195431daf264e3a15f6839f"&gt;You can use pre-trained models to:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="c2879a1fc7c24de08011e12588d72701"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMllvdSUyMGNhbiUyMHVzZSUyMHByZS10cmFpbmVkJTIwbW9kZWxzJTIwdG8lM0ElMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtdW5vcmRlcmVkJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LWl0ZW0lMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyRGV0ZWN0JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyb2JqZWN0cyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwZnJvbSUyMGltYWdlcyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFuYWx5emUlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMHlvdXIlMjBjdXN0b21lcnMlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyc2VudGltZW50JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBmcm9tJTIwZmVlZGJhY2slMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtaXRlbSUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJEZXRlY3QlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0
JTIyJTNBJTIya2V5d29yZHMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMGZyb20lMjB0ZXh0JTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LWl0ZW0lMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyRXh0cmFjdCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnNwZWNpZmljJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyaW5mb3JtYXRpb24lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMGFib3V0JTIweW91ciUyMGJ1c2luZXNzJTIwZnJvbSUyMHRleHQlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="a5f03958d358480e94bab65fb99349ec"&gt;
&lt;UL&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2fa8cc24b74e404abc5686de2a86b58e"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Detect&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;objects&lt;/STRONG&gt; from images&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Analyze&lt;/STRONG&gt; your customers'&amp;nbsp;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;sentiment&lt;/STRONG&gt; from feedback&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;Detect &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;keywords&lt;/STRONG&gt; from text&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="4d479945648743989eb3e507ff18ac10"&gt;&lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Extract&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;specific&lt;/STRONG&gt; &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;information&lt;/STRONG&gt; about your business from text&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
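&lt;P&gt;AI Builder's prebuilt models are normally used straight from Power Apps or Power Automate without writing any code. For readers who prefer code, the sketch below shows the closely related Azure Text Analytics service performing the same kind of sentiment analysis from Python. The endpoint, key, and sample feedback strings are placeholders, and the azure-ai-textanalytics package is assumed to be installed; this is an illustration of the equivalent Azure service, not the AI Builder interface itself.&lt;/P&gt;
&lt;PRE&gt;
# Sketch: sentiment analysis comparable to AI Builder's prebuilt sentiment model,
# using the Azure Text Analytics SDK (pip install azure-ai-textanalytics).
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholders: use your own Cognitive Services endpoint and key.
client = TextAnalyticsClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("YOUR-KEY"),
)

feedback = [
    "The checkout process was quick and painless.",
    "My order arrived late and the box was damaged.",
]

for doc in client.analyze_sentiment(feedback):
    if not doc.is_error:
        # Each result carries an overall label plus per-class confidence scores.
        print(doc.sentiment, doc.confidence_scores)
&lt;/PRE&gt;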
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Is AI Builder the right choice?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="4ab717d03d9f48b09f4d0045fc4c6cea"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="af4ac214f4014e6bb30d34b4e7c20133"&gt;Great question! There are so &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;many tools&lt;/STRONG&gt; out there and &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;many ways to do the same thing&lt;/STRONG&gt;. How do you know which one is the right solution before investing time and effort?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdyZWF0JTIwcXVlc3Rpb24hJTIwVGhlcmUlMjBhcmUlMjBzbyUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJtYW55JTIwdG9vbHMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMG91dCUyMHRoZXJlJTIwYW5kJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMm1hbnklMjB3YXlzJTIwdG8lMjBkbyUyMHRoZSUyMHNhbWUlMjB0aGluZyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybWFyayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJib2xkJTIyJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyLiUyMEhvdyUyMGRvJTIweW91JTIwa25vdyUyMHdoaWNoJTIwb25lJTIwaXMlMjB0aGUlMjByaWdodCUyMHNvbHV0aW9uJTIwYmVmb3JlJTIwaW52ZXN0aW5nJTIwdGltZSUyMGFuZCUyMGVmZm9ydCUzRiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJJJTIwaGF2ZSUyMGElMjBydWxlJTIwb2YlMjB0aHVtYiUyMHdoZW4lMjBJJTIwd2FudCUyMHRvJTIwYnVpbGQlMjBzb21ldGhpbmclMkMlMjB1c2UlMjB3aGF0ZXZlciUyMGlzJTIwYXZhaWxhYmxlJTIwYW5kJTIwZWFzeSUyMHRvJTIwdXNlJTIwZmlyc3QuJTIwV2hlbiUyMHlvdXIlMjBuZWVkcyUyMGV4Y2VlZCUyMHdoYXQlMjB0aGUlMjB0b29sJTIweW91JTIwYXJlJTIwdXNpbmclMjBjb3ZlcnMlMkMlMjBsb29rJTIwaW50byUyMGFub3RoZXIlMjBzb2x1dGlvbiUyMG9yJTIwYnVpbGRpbmclMjBpdCUyMHlvdXJzZWxmLiUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="8d1e08bc89f440adbb979587bf0c0a51"&gt;I have a rule of thumb when I want to build something, use whatever is available and easy to use first. When your needs exceed what the tool you are using covers, look into another solution or building it yourself.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c lia-indent-padding-left-30px" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;EM&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN style="font-family: inherit;"&gt;Use the tool &lt;/SPAN&gt;&lt;STRONG class="bold-3c254bd9" style="font-family: inherit;" data-slate-leaf="true"&gt;easiest &lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt;to get started when you are building your idea. When your &lt;/SPAN&gt;&lt;STRONG class="bold-3c254bd9" style="font-family: inherit;" data-slate-leaf="true"&gt;needs exceed the capabilities&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; of the tool you are using, find a solution that enables you. Don't invest in building things from scratch before you know it is worth it to do so.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="6fca6b8cb6c4494f8b605de01883f6af"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="8d1e08bc89f440adbb979587bf0c0a51"&gt;For example, if you have an app idea, it is better to have a prototype running as easily as possible. You can test your ideas before investing your time into building custom designed UI or features. In our specific case, you can first prototype your app with the &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;drag and drop UI&lt;/STRONG&gt; of &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;Power Apps&lt;/STRONG&gt; and using &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;prebuilt AI models&lt;/STRONG&gt;. When your specific needs surface, such as recognizing a particular object or keyword, you can invest your time into creating your custom models to train for the &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;object&lt;/STRONG&gt; or &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;keyword detection&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Can I use Power Apps and AI Builder for production?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Yes, you can. Like any tool that does things magically, AI Builder in Power Apps comes with a cost. That does not mean you can't &lt;A href="https://docs.microsoft.com/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;try your ideas out for free&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;What will my production app cost?&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to go to production with Power Apps, it is a good idea to consider the costs. Thankfully, there is an app for that. The &lt;A href="https://powerapps.microsoft.com/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;AI Builder Calculator&lt;/A&gt; lets you input which &lt;STRONG&gt;AI tools you will need&lt;/STRONG&gt; and &lt;STRONG&gt;how many users&lt;/STRONG&gt; will be accessing your app's AI features, and gives you an estimate of what it will cost.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aiBuilderCalculate.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231033iD3C383D708D37493/image-size/large?v=v2&amp;amp;px=999" role="button" title="aiBuilderCalculate.png" alt="aiBuilderCalculate.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;What are preview features?&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="282b4d7225064fad9f71fc0a55cbf20d"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="b6129f9acc9b497bb8e96cd0b8813cba"&gt;AI Builder was released for &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;public preview&lt;/STRONG&gt; on June 10, 2019 in Europe and the United States. Preview release features are subject to change and may have restricted functionality before the official release for general availability. Preview releases are not meant for production use. You can try them out and influence the final product by giving feedback. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="282b4d7225064fad9f71fc0a55cbf20d"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFJJTIwQnVpbGRlciUyMHdhcyUyMHJlbGVhc2VkJTIwZm9yJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnB1YmxpYyUyMHByZXZpZXclMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMG9uJTIwSnVuZSUyMDEwJTJDJTIwMjAxOSUyMGluJTIwRXVyb3BlJTIwYW5kJTIwdGhlJTIwVW5pdGVkJTIwU3RhdGVzLiUyMFByZXZpZXclMjByZWxlYXNlJTIwZmVhdHVyZXMlMjBhcmUlMjBzdWJqZWN0JTIwdG8lMjBjaGFuZ2UlMjBhbmQlMjBtYXklMjBoYXZlJTIwcmVzdHJpY3RlZCUyMGZ1bmN0aW9uYWxpdHklMjBiZWZvcmUlMjB0aGUlMjBvZmZpY2lhbCUyMHJlbGVhc2UlMjBmb3IlMjBnZW5lcmFsJTIwYXZhaWxhYmlsaXR5LiUyMFByZXZpZXclMjByZWxlYXNlcyUyMGFyZSUyMG5vdCUyMG1lYW50JTIwZm9yJTIwcHJvZHVjdGlvbiUyMHVzZS4lMjBZb3UlMjBjYW4lMjB0cnklMjB0aGVtJTIwb3V0JTIwYW5kJTIwaW5mbHVlbmNlJTIwdGhlJTIwZmluYWwlMjBwcm9kdWN0JTIwYnklMjBnaXZpbmclMjBmZWVkYmFjay4lMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyVGhlJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdlbmVyYWwlMjBBdmFpbGFiaWxpdHklMjAoR0ElMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiklMjByZWxlYXNlJTIwd2lsbCUyMG9jY3VyJTIwaW4lMjBhJTIwcGhhc2VkJTIwbWFubmVyJTJDJTIwd2l0aCUyMHNvbWUlMjBmZWF0dXJlcyUyMHJlbWFpbmluZyUyMGluJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMnByZXZpZXclMjBzdGF0dXMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMm1hcmslMjIlMkMlMjJ0eXBlJTIyJTNBJTIyYm9sZCUyMiUyQyUyMmRhdGElMjIlM0ElN0IlN0QlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMHdoaWxlJTIwb3RoZXJzJTIwYXJlJTIwcmVsZWFzZWQlMjBmb3IlMjBHQS4lMjBZb3UlMjBjYW4lMjBjaGVjayUyMG91dCUyMHRoZSUyMHJlbGVhc2UlMjBzdGF0dXMlMjBvbiUyMHRoZSUyMCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZkb2NzLm1pY3Jvc29mdC5jb20lMkZhaS1idWlsZGVyJTJGb3ZlcnZpZXclM0ZXVC5tY19pZCUzRGFpbWwtODQzOC1heXlvbmV0JTIzcmVsZWFzZS1zdGF0dXMlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkFJJTIwQnVpbGRlciUyMGRvY3VtZW50YXRpb24lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMi4lMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="25042651bf5e4732917ab56ded74ec45"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="21eee968ef634d4292f8107c179a91ad"&gt;The &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;General Availability (GA&lt;/STRONG&gt;) release will occur in a phased manner, with some features remaining in &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;preview status&lt;/STRONG&gt; while others are released for GA. You can check out the release status on the &lt;/SPAN&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/ai-builder/overview?WT.mc_id=aiml-8438-ayyonet#release-status" target="_blank" rel="noopener noreferrer" data-key="3691d834fde7487c9fe97b9d9ef22edb"&gt;&lt;SPAN data-key="fc4c858b230c45b69dec2bc9afbee09b"&gt;AI Builder documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-key="608b72be07c347a886bc97a8450c1018"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AIBuilderPreview.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231034iA92C2930BDEC2F40/image-size/large?v=v2&amp;amp;px=999" role="button" title="AIBuilderPreview.png" alt="AIBuilderPreview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is Object Detection?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;AI Builder object detection is an AI model that you can train to &lt;STRONG&gt;detect objects in pictures&lt;/STRONG&gt;. AI models usually require that you provide samples of data for training before you can make predictions. Prebuilt models are pre-trained using a set of samples provided by Microsoft, so they are instantly ready to be used for predictions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="testResultSmall.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231042i6EA424CD5D2B5421/image-size/large?v=v2&amp;amp;px=999" role="button" title="testResultSmall.gif" alt="testResultSmall.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Object detection can detect up to &lt;STRONG&gt;500 different objects in a single model&lt;/STRONG&gt; and supports &lt;STRONG&gt;JPG, PNG, and BMP&lt;/STRONG&gt; image formats, as well as photos taken through the Power Apps control.&lt;/P&gt;
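&lt;P&gt;If you are assembling training images in bulk, it can help to check them against the supported formats before uploading. Below is a minimal Python sketch of such a check; the folder name is a placeholder, and the .jpeg extension is included on the assumption that it is treated the same as .jpg.&lt;/P&gt;
&lt;PRE&gt;
# Sketch: keep only the image files object detection training can ingest (JPG, PNG, BMP).
from pathlib import Path

SUPPORTED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}  # .jpeg assumed equivalent to .jpg

def usable_training_images(folder):
    """Return the files in `folder` whose extension matches a supported format."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTENSIONS
    )

if __name__ == "__main__":
    images = usable_training_images("training_images")  # placeholder folder name
    print(f"{len(images)} usable images found")
&lt;/PRE&gt;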
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to try out Object Detection capabilities?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can try out and see how object detection works, without having to create any accounts or apps yourself, on the &lt;A href="https://azure.microsoft.com/services/cognitive-services/computer-vision/?WT.mc_id=aiml-8438-ayyonet#features" target="_blank" rel="noopener noreferrer"&gt;Azure Computer Vision&lt;/A&gt; page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="seeItinAction.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231036i7355A9B43AC35791/image-size/large?v=v2&amp;amp;px=999" role="button" title="seeItinAction.png" alt="seeItinAction.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;What can you do with Object Detection?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Object counting and inventory management (see the counting sketch after this list)&lt;/LI&gt;
&lt;LI&gt;Brand logo recognition&lt;/LI&gt;
&lt;LI&gt;Wildlife animal recognition&lt;/LI&gt;
&lt;/UL&gt;
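&lt;P&gt;To make the object counting use case concrete, here is a small Python sketch that turns a list of detections into inventory counts. The prediction list is hypothetical sample data; its shape (a tag name, a confidence score, and a bounding box per detection) mirrors what object detection models typically return, and the confidence threshold is an arbitrary choice.&lt;/P&gt;
&lt;PRE&gt;
# Sketch: counting detected objects for a simple inventory tally.
from collections import Counter

# Hypothetical detections: one entry per object found in an image.
predictions = [
    {"tag": "headset", "confidence": 0.94, "box": {"left": 0.10, "top": 0.20, "width": 0.30, "height": 0.40}},
    {"tag": "headset", "confidence": 0.81, "box": {"left": 0.55, "top": 0.18, "width": 0.28, "height": 0.39}},
    {"tag": "controller", "confidence": 0.42, "box": {"left": 0.05, "top": 0.70, "width": 0.15, "height": 0.20}},
]

MIN_CONFIDENCE = 0.5  # ignore low-confidence detections

inventory = Counter(p["tag"] for p in predictions if p["confidence"] &gt;= MIN_CONFIDENCE)
for tag, count in inventory.most_common():
    print(f"{tag}: {count}")
# Expected output: headset: 2
&lt;/PRE&gt;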
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;How to detect objects from images?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;To start creating the AI model for your app, sign in to &lt;A href="https://powerapps.microsoft.com/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Power Apps&lt;/A&gt; and click AI Builder in the left-hand menu. Select Object Detection from the "Refine Model for your business needs" option.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="buildAI.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231037i819A81900F35C690/image-size/large?v=v2&amp;amp;px=999" role="button" title="buildAI.png" alt="buildAI.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Give your new AI model a unique name. Select Common Objects and proceed to the next section.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="commonObj.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231038i7060E000CF9AE3BE/image-size/large?v=v2&amp;amp;px=999" role="button" title="commonObj.png" alt="commonObj.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Name the objects that you are going to detect.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="namedObjects.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231039iA728BDAEF57A2F11/image-size/large?v=v2&amp;amp;px=999" role="button" title="namedObjects.png" alt="namedObjects.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Upload images that contain the objects you will detect. To start with, you can upload &lt;STRONG&gt;15 images for each object&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="imageDetectionFormat.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231040iD670F3B5FFCDAA51/image-size/medium?v=v2&amp;amp;px=400" role="button" title="imageDetectionFormat.png" alt="imageDetectionFormat.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Make sure each object has approximately the same number of images tagged. If one object has many more examples than the others, the trained model will be more likely to detect that object even when it is not present (the sketch after this list shows a quick balance check).&lt;/LI&gt;
&lt;LI&gt;Tag your objects by selecting the area each object is in and choosing the name of the object.&lt;/LI&gt;
&lt;/UL&gt;
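&lt;P&gt;The balance check mentioned above can be automated with a few lines of Python. The mapping of image names to tags below is hypothetical, and the 15-image minimum and 2x imbalance ratio simply echo the starting point suggested in this article.&lt;/P&gt;
&lt;PRE&gt;
# Sketch: verify that every object has enough tagged images and that tags are roughly balanced.
from collections import Counter

# Hypothetical mapping of each uploaded image to the object tags it contains.
tagged_images = {
    "img_001.jpg": ["headset"],
    "img_002.jpg": ["headset", "controller"],
    "img_003.jpg": ["controller"],
}

MIN_IMAGES_PER_TAG = 15  # the suggested starting point per object

counts = Counter(tag for tags in tagged_images.values() for tag in tags)

for tag, n in counts.items():
    shortfall = max(0, MIN_IMAGES_PER_TAG - n)
    if shortfall:
        print(f"{tag}: only {n} tagged images, add {shortfall} more")

# A large imbalance can bias the model toward the over-represented object.
if counts:
    most, least = max(counts.values()), min(counts.values())
    if most &gt;= 2 * least:
        print("Tag counts are unbalanced; add images for the rarer objects.")
&lt;/PRE&gt;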
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="tagging.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231043i3FB7A1BFE1C6E937/image-size/large?v=v2&amp;amp;px=999" role="button" title="tagging.png" alt="tagging.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once you are done, choose Done Tagging and then Train. The training process will take some time.&lt;/LI&gt;
&lt;LI&gt;If you decide not to use an image, or want to clear any tags, you can do so at any time: go back to &lt;STRONG&gt;AI Builder&lt;/STRONG&gt; in the left-hand menu, choose your &lt;STRONG&gt;model&lt;/STRONG&gt;, and choose Edit.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dontUseImage.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231044iE5EF548261A710DB/image-size/large?v=v2&amp;amp;px=999" role="button" title="dontUseImage.png" alt="dontUseImage.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;AI Builder will give you a performance score out of 100 and a way to quickly test your model before publishing. You can edit your model and retrain it to improve the score. The next section covers some best practices for improving performance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="performance.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231045i2E73E8C4D5C4E3E9/image-size/large?v=v2&amp;amp;px=999" role="button" title="performance.png" alt="performance.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How to Improve Your Custom Model Performance?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="f561f804df1241b192b66665ca8c0ceb"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="f9aa28c6644f4a46a5f388e4ba7621e1"&gt;Getting the best model performance for your business can be an iterative process. Results can vary depending on the customizations you make to the model, and the training data you provide.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkdldHRpbmclMjB0aGUlMjBiZXN0JTIwbW9kZWwlMjBwZXJmb3JtYW5jZSUyMGZvciUyMHlvdXIlMjBidXNpbmVzcyUyMGNhbiUyMGJlJTIwYSUyMHJhdGhlciUyMGl0ZXJhdGl2ZSUyMHByb2Nlc3MuJTIwUmVzdWx0cyUyMGNhbiUyMHZhcnklMjBkZXBlbmRpbmclMjBvbiUyMHRoZSUyMGN1c3RvbWl6YXRpb25zJTIweW91JTIwbWFrZSUyMHRvJTIwdGhlJTIwbW9kZWwlMkMlMjBhbmQlMjB0aGUlMjB0cmFpbmluZyUyMGRhdGElMjB5b3UlMjBwcm92aWRlLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJUbyUyMGhlbHAlMjBmYWNpbGl0YXRlJTIwdGhpcyUyMHByb2Nlc3MlMkMlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyQUklMjBCdWlsZGVyJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBhbGxvd3MlMjB5b3UlMjB0byUyMGhhdmUlMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIybXVsdGlwbGUlMjB2ZXJzaW9ucyUyMG9mJTIweW91ciUyMG1vZGVsJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJtYXJrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmJvbGQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjIlMjBzbyUyMHlvdSUyMGNhbiUyMHVzZSUyMHlvdXIlMjBtb2RlbCUyMGFuZCUyMGNvbnRpbnVlJTIwdG8lMjBpbXByb3ZlJTIwaXQlMjBhdCUyMHRoZSUyMHNhbWUlMjB0aW1lLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;To help facilitate this process, &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;AI Builder&lt;/STRONG&gt; allows you to have &lt;STRONG class="bold-3c254bd9" data-slate-leaf="true"&gt;multiple versions of your model&lt;/STRONG&gt; so you can use your model and continue to improve it at the same time.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 class="blockParagraph-544a408c" data-key="86c82c3de6594d0aaa781bb7b773314e"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2da672f6d5a3410ea3ae58a93f23e1d2"&gt;What are some best practices for training for object detection?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use diverse images&lt;/STRONG&gt; to train for all possible use cases. For example, if you are training a model to detect a VR headset, use images of the headset in different environments as well as images of it in its box. If you train only with images of people wearing the headset, your model will not recognize the same device when it is in its box.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PXL_20201007_121129483.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231047i4D7AEF341316FAAF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PXL_20201007_121129483.jpg" alt="PXL_20201007_121129483.jpg" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="-1x-1.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231048iF96BD3B8915D96AC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="-1x-1.jpg" alt="-1x-1.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use images with a variety of &lt;STRONG&gt;backgrounds&lt;/STRONG&gt;. Photos in context are better than photos in front of neutral backgrounds.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PXL_20201007_121045280.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231049iA5D13D928462B5BC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PXL_20201007_121045280.jpg" alt="PXL_20201007_121045280.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use training images that have different &lt;STRONG&gt;lighting&lt;/STRONG&gt;. For example, include images taken with flash, high exposure, and so on.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231051iC52BAA5292ABDC25/image-size/medium?v=v2&amp;amp;px=400" role="button" title="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" alt="00100lrPORTRAIT_00100_BURST20191202194227961_COVER.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use images of objects in &lt;STRONG&gt;varied sizes&lt;/STRONG&gt;. Different sizing helps the model generalize better.&lt;/LI&gt;
&lt;LI&gt;Use images taken from different &lt;STRONG&gt;angles&lt;/STRONG&gt;. If all your photos are from a set of fixed cameras, such as surveillance cameras, assign a different label to each camera. This can help avoid modeling unrelated objects, such as lampposts, as the key feature. Assign camera labels even if the cameras capture the same objects.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" style="width: 300px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231052i3F1E5736CAB35567/image-size/medium?v=v2&amp;amp;px=400" role="button" title="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" alt="00100lPORTRAIT_00100_BURST20190429130136402_COVER.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;SPAN&gt;How to share your models?&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="619d4823bb21470db9d121082e6bd572"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="2b4cedbec4df45a08c9e9ef5eddc9ef6"&gt;By default, only you can see the models you create and publish. This feature allows you to test them and use them within apps and flows without exposing them.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="619d4823bb21470db9d121082e6bd572"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkJ5JTIwZGVmYXVsdCUyQyUyMG9ubHklMjB5b3UlMjBjYW4lMjBzZWUlMjB0aGUlMjBtb2RlbHMlMjB5b3UlMjBjcmVhdGUlMjBhbmQlMjBwdWJsaXNoLiUyMFRoaXMlMjBmZWF0dXJlJTIwYWxsb3dzJTIweW91JTIwdG8lMjB0ZXN0JTIwdGhlbSUyMGFuZCUyMHVzZSUyMHRoZW0lMjB3aXRoaW4lMjBhcHBzJTIwYW5kJTIwZmxvd3MlMjB3aXRob3V0JTIwZXhwb3NpbmclMjB0aGVtLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJJZiUyMHlvdSUyMHdhbnQlMjBvdGhlcnMlMjB0byUyMHVzZSUyMHlvdXIlMjBtb2RlbCUyQyUyMHlvdSUyMGNhbiUyMHNoYXJlJTIwaXQlMjB3aXRoJTIwc3BlY2lmaWMlMjB1c2VycyUyQyUyMGdyb3VwcyUyQyUyMG9yJTIweW91ciUyMHdob2xlJTIwb3JnYW5pemF0aW9uLiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="90a12d0f0e6a4f378391d07edb57dfaf"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="c739c2ab03294129861e2687f7e9639f"&gt;If you want others to use your model, you can share it with specific users, groups, or your whole organization.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="90a12d0f0e6a4f378391d07edb57dfaf"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H3&gt;How to use your Custom Vision model in a Power App?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you are happy with your model's performance, you can add it to a new app by choosing &lt;STRONG&gt;Use model&lt;/STRONG&gt; and &lt;STRONG&gt;New app&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="createPApp.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231053iF0E300B00A9520DE/image-size/large?v=v2&amp;amp;px=999" role="button" title="createPApp.png" alt="createPApp.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will be redirected to the Power Apps editor, and an Object Detection component that uses your model will be added automatically. In the editor, you can add new pages, set up navigation, and design and customize your pages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerAppEditor.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231054iA6A607B2EC5B9924/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerAppEditor.png" alt="powerAppEditor.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Once you are happy with the design, you can publish and share your app. You can use your new app by downloading Power Apps from the &lt;A href="https://apps.apple.com/us/app/power-apps/id1047318566" target="_blank" rel="noopener"&gt;Apple&lt;/A&gt;, &lt;A href="https://play.google.com/store/apps/details?id=com.microsoft.msapps&amp;amp;hl=en_US&amp;amp;gl=US" target="_blank" rel="noopener"&gt;Android&lt;/A&gt;, or &lt;A href="https://www.microsoft.com/en-us/p/power-apps/9nblggh5z8f3?ocid=9nblggh5z8f3_ORSEARCH_Bing&amp;amp;rtc=1#activetab=pivot:overviewtab" target="_blank" rel="noopener"&gt;Microsoft&lt;/A&gt; stores. Once you sign in, your app will be listed in the Power Apps mobile app.&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="powerAppsPlayStore.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231058iE1472CF4D467818A/image-size/large?v=v2&amp;amp;px=999" role="button" title="powerAppsPlayStore.png" alt="powerAppsPlayStore.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;What's next?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that you have your app's prototype, you can add more features, get feedback, and test your app.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Should I keep using my Power App or rebuild it?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="423db96917dd45e9a50460b43519e456"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="e45e3f2f197f4dc7b47b2aadc4bc6b87"&gt;When your needs change, you can consider refactoring your application to a serverless backend and a custom built UI. If the app is working fine for you and your users, you can continue using and improving overtime using Power Apps. &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc"&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--commentsAreaHighlight-e689c7a4" contenteditable="false"&gt;‌&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;P class="blockParagraph-544a408c" data-key="42bacddf1fa04516afd7deca73c1d59d"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="c7685faf4b174896a84696db7cc274b5"&gt;What would be the changes that require the upgrade? There are two possibilities for the changed requirements for your app:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="blockParagraph-544a408c" data-key="42bacddf1fa04516afd7deca73c1d59d"&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="reset-3c756112--withControls-56f27afc" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMldoZW4lMjB5b3VyJTIwbmVlZHMlMjBjaGFuZ2UlMkMlMjB5b3UlMjBjYW4lMjBjb25zaWRlciUyMHJlZmFjdG9yaW5nJTIweW91ciUyMGFwcGxpY2F0aW9uJTIwdG8lMjBhJTIwc2VydmVybGVzcyUyMGJhY2tlbmQlMjBhbmQlMjBhJTIwY3VzdG9tJTIwYnVpbHQlMjBVSS4lMjBJZiUyMHRoZSUyMGFwcCUyMGlzJTIwd29ya2luZyUyMGZpbmUlMjBmb3IlMjB5b3UlMjBhbmQlMjB5b3VyJTIwdXNlcnMlMkMlMjB5b3UlMjBjYW4lMjBjb250aW51ZSUyMHVzaW5nJTIwYW5kJTIwaW1wcm92aW5nJTIwb3ZlcnRpbWUlMjB1c2luZyUyMFBvd2VyJTIwQXBwcy4lMjAlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMnBhcmFncmFwaCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyV2hhdCUyMHdvdWxkJTIwYmUlMjB0aGUlMjBjaGFuZ2VzJTIwdGhhdCUyMHJlcXVpcmVzJTIwdGhlJTIwdXBncmFkZSUzRiUyMFRoZXJlJTIwYXJlJTIwdHdvJTIwcG9zc2liaWxpdGllcyUyMGZvciUyMHRoZSUyMGNoYW5nZWQlMjByZXF1aXJlbWVudHMlMjBmb3IlMjB5b3VyJTIwYXBwJTNBJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlMkMlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkZlYXR1cmUlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMmJsb2NrJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpc3QtaXRlbSUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIycGFyYWdyYXBoJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJ0ZXh0JTIyJTJDJTIybGVhdmVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIybGVhZiUyMiUyQyUyMnRleHQlMjIlM0ElMjJCdWRnZXQlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RCU1RCU3RA=="&gt;
&lt;DIV class="reset-3c756112--sideControlsWrapper-009b974d"&gt;
&lt;DIV class="reset-3c756112--commentsArea-56f27afc"&gt;
&lt;DIV class="reset-3c756112--contentWrapper-56f27afc" role="presentation"&gt;
&lt;DIV class="reset-3c756112--listItemContent-756c9114" data-key="304bf6290945455ea4640fe42ac13db9"&gt;
&lt;UL&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="48b71c16a4a446c58f46f443686efc84"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;SPAN data-key="f2209c25d08b4d69998637fe161cca0a"&gt;Feature&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="48b71c16a4a446c58f46f443686efc84"&gt;Budget&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to create a custom feature for Power Apps?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Ready-made tools are always limited to the features the product team decides to include. If you are writing custom code, you can add any feature you need. Thankfully, for features that are not yet implemented, it is always possible to author a custom connector that you can use with or without Power Apps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A &lt;STRONG&gt;connector&lt;/STRONG&gt; is a &lt;STRONG&gt;proxy&lt;/STRONG&gt; or a &lt;STRONG&gt;wrapper around an API&lt;/STRONG&gt; that allows the underlying service to talk to &lt;STRONG&gt;Microsoft Power Automate&lt;/STRONG&gt;, &lt;STRONG&gt;Microsoft Power Apps&lt;/STRONG&gt;, and &lt;STRONG&gt;Azure Logic Apps&lt;/STRONG&gt;. It provides a way for users to connect their accounts and leverage a set of pre-built actions and triggers to build their apps and workflows.&lt;/P&gt;
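&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For illustration only, here is a sketch of the kind of tiny web API a custom connector could wrap. It is a hypothetical Flask service; the route, payload, and field names are invented for this example, and in practice you would describe such an API with an OpenAPI definition when you register the custom connector.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# pip install flask
# Hypothetical, minimal API that a Power Apps custom connector could wrap.
# The route and response shape are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/inventory/count", methods=["POST"])
def count_inventory():
    # A custom connector would forward the app's request here and surface
    # the JSON response as an action output in Power Apps or Power Automate.
    body = request.get_json(force=True)
    shelf_id = body.get("shelfId", "unknown")
    detected_items = body.get("detectedItems", [])
    return jsonify({"shelfId": shelf_id, "itemCount": len(detected_items)})

if __name__ == "__main__":
    app.run(port=5000)&lt;/LI-CODE&gt;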
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Check out the list of &lt;A href="https://docs.microsoft.com/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Power Apps Connectors&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/connectors/custom-connectors/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;how to build a custom connector&lt;/A&gt; yourself.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to compare costs for Power Apps and Logic Apps?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you start using your app, you will have a better idea of the number of users accessing AI capabilities and the number of images you need to train with. You can use the &lt;A href="https://powerapps.microsoft.com/en-us/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;AI Builder Cost Calculator&lt;/A&gt; and the &lt;A href="https://azure.microsoft.com/pricing/details/logic-apps/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Logic App Cost Calculator&lt;/A&gt; to compare options. You can check the price of any other service through the &lt;A href="https://azure.microsoft.com/pricing/calculator/?service=logic-apps&amp;amp;WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer"&gt;Azure Product Cost Calculator&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;Additional Resources&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_self"&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;Artificial Intelligence for Developers&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;&lt;A title="Cognitive Services Overview" href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener"&gt;Cognitive Services Overview&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/powerapps/maker/signup-for-powerapps?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="940d8a03438c4049bbdd740cfc6335bd" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmlubGluZSUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaW5rJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTIyaHJlZiUyMiUzQSUyMmh0dHBzJTNBJTJGJTJGZG9jcy5taWNyb3NvZnQuY29tJTJGcG93ZXJhcHBzJTJGbWFrZXIlMkZzaWdudXAtZm9yLXBvd2VyYXBwcyUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMlBvd2VyJTIwQXBwcyUyMEZyZWUlMjBUcmlhbCUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0Q="&gt;&lt;SPAN data-key="0be4c78ec89747a28c35fa10b7f39793"&gt;Power Apps Free Trial&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="reset-3c756112--listItemContent-756c9114" data-key="3d4576f9d943425f82fcbff6d504d775"&gt;
&lt;P class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="d69ae82c8f9f4b0bbec6bb4785ab1c81"&gt;&lt;SPAN class="text-4505230f--TextH400-3033861f--textContentFamily-49a318e1"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/power-platform/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="e2d8bac36dcf4340bf5afdb6e5918a95"&gt;&lt;SPAN data-key="3db9d9eaafd7472983375cc5e7f4a680"&gt;Power Platform Documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI class="blockParagraph-544a408c--noMargin-acdf7afa" data-key="fbef39a01f05416db4b9c1b50a66d2a1"&gt;&lt;SPAN data-key="a1661c95f8a141cfb7468f7409daf18c"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://docs.microsoft.com/en-us/connectors/connector-reference/connector-reference-powerapps-connectors?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="27da6edf246a4d4b9759b554bce97421" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZkb2NzLm1pY3Jvc29mdC5jb20lMkZlbi11cyUyRmNvbm5lY3RvcnMlMkZjb25uZWN0b3ItcmVmZXJlbmNlJTJGY29ubmVjdG9yLXJlZmVyZW5jZS1wb3dlcmFwcHMtY29ubmVjdG9ycyUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjIlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMkxpc3QlMjBvZiUyMFBvd2VyJTIwQXBwcyUyMENvbm5lY3RvcnMlMjIlMkMlMjJtYXJrcyUyMiUzQSU1QiU1RCU3RCU1RCU3RCU1RCU3RCUyQyU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdEJTVEJTdE"&gt;List of Power Apps Connectors&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" style="background-color: #ffffff;" href="https://docs.microsoft.com/ai-builder/overview?WT.mc_id=aiml-8438-ayyonet#release-status" target="_blank" rel="noopener noreferrer" data-key="036857cd5d28471398af1b19c7338c24"&gt;AI Builder Release Status&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" style="font-family: inherit; background-color: #ffffff;" href="https://powerapps.microsoft.com/en-us/ai-builder-calculator/?WT.mc_id=aiml-8438-ayyonet" target="_blank" rel="noopener noreferrer" data-key="bc7e432b04814d829ee3f48d4366c534"&gt;&lt;SPAN data-key="6ece649bd56c4515a8ee954050659dc8"&gt;AI Builder Cost Calculator&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-key="6ece649bd56c4515a8ee954050659dc8"&gt;&lt;A class="link-a079aa82--primary-53a25e66--link-faf6c434" href="https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/?WT.mc_id=aiml-8438-ayyonet#features" target="_blank" rel="noopener noreferrer" data-key="f2f54aa52cb34698a5d112d487f23134" data-slate-fragment="JTdCJTIyb2JqZWN0JTIyJTNBJTIyZG9jdW1lbnQlMjIlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJsaXN0LXVub3JkZXJlZCUyMiUyQyUyMmlzVm9pZCUyMiUzQWZhbHNlJTJDJTIyZGF0YSUyMiUzQSU3QiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIyYmxvY2slMjIlMkMlMjJ0eXBlJTIyJTNBJTIybGlzdC1pdGVtJTIyJTJDJTIyaXNWb2lkJTIyJTNBZmFsc2UlMkMlMjJkYXRhJTIyJTNBJTdCJTdEJTJDJTIybm9kZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJibG9jayUyMiUyQyUyMnR5cGUlMjIlM0ElMjJwYXJhZ3JhcGglMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlN0QlMkMlMjJub2RlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMnRleHQlMjIlMkMlMjJsZWF2ZXMlMjIlM0ElNUIlN0IlMjJvYmplY3QlMjIlM0ElMjJsZWFmJTIyJTJDJTIydGV4dCUyMiUzQSUyMiUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIyaW5saW5lJTIyJTJDJTIydHlwZSUyMiUzQSUyMmxpbmslMjIlMkMlMjJpc1ZvaWQlMjIlM0FmYWxzZSUyQyUyMmRhdGElMjIlM0ElN0IlMjJocmVmJTIyJTNBJTIyaHR0cHMlM0ElMkYlMkZhenVyZS5taWNyb3NvZnQuY29tJTJGZW4tdXMlMkZzZXJ2aWNlcyUyRmNvZ25pdGl2ZS1zZXJ2aWNlcyUyRmNvbXB1dGVyLXZpc2lvbiUyRiUzRldULm1jX2lkJTNEYWltbC04NDM4LWF5eW9uZXQlMjNmZWF0dXJlcyUyMiU3RCUyQyUyMm5vZGVzJTIyJTNBJTVCJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyQ29tcHV0ZXIlMjBWaXNpb24lMjBPdmVydmlldyUyMiUyQyUyMm1hcmtzJTIyJTNBJTVCJTVEJTdEJTVEJTdEJTVEJTdEJTJDJTdCJTIyb2JqZWN0JTIyJTNBJTIydGV4dCUyMiUyQyUyMmxlYXZlcyUyMiUzQSU1QiU3QiUyMm9iamVjdCUyMiUzQSUyMmxlYWYlMjIlMkMlMjJ0ZXh0JTIyJTNBJTIyJTIwJTIyJTJDJTIybWFya3MlMjIlM0ElNUIlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0QlNUQlN0Q="&gt;Computer Vision Overview&lt;/A&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;Leave a comment below for your AI application use cases and the tutorials you would like to see.&lt;/EM&gt;&lt;/P&gt;
</description>
      <pubDate>Mon, 15 Mar 2021 21:14:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-prototyping-a-no-code-solution-with-power-apps/ba-p/2189550</guid>
      <dc:creator>Yonet</dc:creator>
      <dc:date>2021-03-15T21:14:46Z</dc:date>
    </item>
    <item>
      <title>Mask detection now available in preview via Azure Cognitive Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/mask-detection-now-available-in-preview-via-azure-cognitive/ba-p/2194157</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The spread of COVID-19 has changed our day-to-day life in unprecedented ways. Organizations around the world are taking action to contain and help prevent further spread of the disease by using AI technologies, such as computer vision, to help ensure the safety of their employees and customers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Cognitive Services now provides mask detection functionality to assist application developers in building solutions that can help monitor and contain the spread. Mask detection can be deployed anywhere: in the cloud using the Face service, or on the edge using the Spatial analysis service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Mask detection on the edge&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge" target="_blank" rel="noopener"&gt;Spatial analysis&lt;/A&gt; is, a capability of &lt;FONT size="3"&gt;Computer&lt;/FONT&gt; Vision, part of Azure Cognitive Services. This capability understands people’s movements in a physical space by analyzing real-time video, significantly increasing efficiency, and providing valuable insights for enabling various scenarios including,&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Counting people in a space for maximum occupancy&lt;/LI&gt;
&lt;LI&gt;Understanding the distance between people for social distancing measures&lt;/LI&gt;
&lt;LI&gt;Determining customer footfall such as in retail spaces&lt;/LI&gt;
&lt;LI&gt;Determining wait time in a checkout line&lt;/LI&gt;
&lt;LI&gt;Determining trespassing in protected areas&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Spatial analysis can now detect whether a person is wearing a protective face covering or not. With this new capability, businesses can leverage insights to build applications that measure safety and enhance compliance. For example, a business can aggregate data on the percentage of people wearing masks in a physical space to improve compliance measures. To help ensure the safety of people working in a given space, &lt;SPAN&gt;mask detection can also be used to send a notification when a person accidentally enters the space without a face mask.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Mask detection can be enabled for the following spatial analysis operations: &lt;EM&gt;personcount&lt;/EM&gt;, &lt;EM&gt;personcrossingline&lt;/EM&gt;, and &lt;EM&gt;personcrossingpolygon&lt;/EM&gt;. The classifier model can be enabled by setting the ‘ENABLE_FACE_MASK_CLASSIFIER’ parameter to True; it is disabled by default. The attributes &lt;EM&gt;face_mask&lt;/EM&gt; or &lt;EM&gt;face_noMask&lt;/EM&gt; will be returned as metadata, with a confidence score, for each person detected in the video stream.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Spatial_analysis.jpg" style="width: 567px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261767iFB9D37F51D3F21F1/image-dimensions/567x355?v=v2" width="567" height="355" role="button" title="Spatial_analysis.jpg" alt="Face mask and Person detection with Spatial analysis" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Face mask and Person detection with Spatial analysis&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Spatial analysis operations provide a real-time video analysis pipeline for new and existing RTSP cameras. The deployment of the spatial analysis container on edge devices is facilitated by Azure IoT Hub. When video is streamed and processed by spatial analysis, the container emits AI insight events about people’s movement, which are in turn sent to Azure IoT Hub as IoT telemetry. From IoT Hub, you can create routes to other Azure services and build your business solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="spatial_analysis_container.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261768i608FB98277291AFC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="spatial_analysis_container.png" alt="Spatial analysis container deployment with Azure IoT" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Spatial analysis container deployment with Azure IoT&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;The events from each operation are egressed to Azure IoT Hub in JSON format. Below is a sample JSON event emitted by the &lt;EM&gt;cognitiveservices.vision.spatialanalysis-personcrossingline&lt;/EM&gt; operation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "events": [
        {
            "id": "3733eb36935e4d73800a9cf36185d5a2",
            "type": "personLineEvent",
            "detectionIds": [
                "90d55bfc64c54bfd98226697ad8445ca"
            ],
            "properties": {
                "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
                "status": "CrossLeft"
            },
            "zone": "doorcamera"
        }
    ],
    "sourceInfo": {
        "id": "camera_id",
        "timestamp": "2020-08-24T06:06:53.261Z",
        "width": 608,
        "height": 342,
        "frameId": "1340",
        "imagePath": ""
    },
    "detections": [
        {
            "type": "person",
            "id": "90d55bfc64c54bfd98226697ad8445ca",
            "region": {
                "type": "RECTANGLE",
                "points": [
                    {
                        "x": 0.491627341822574,
                        "y": 0.2385801348769874
                    },
                    {
                        "x": 0.588894994635331,
                        "y": 0.6395559924387793
                    }
                ]
            },
            "confidence": 0.9005028605461121,
            "metadata": {
	        "attributes": {
	            "face_Mask": 0.99
	        }
	    }
        }
    ],
    "schemaVersion": "1.0"
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
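&lt;P&gt;To work with these events programmatically, one option is to read them from your IoT Hub's built-in Event Hubs-compatible endpoint. The snippet below is a minimal sketch using the azure-eventhub Python package; the connection string is a placeholder for your own hub, and the face_mask / face_noMask attribute names follow the event schema shown above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# pip install azure-eventhub
# Minimal sketch: read spatial analysis events from the IoT Hub
# Event Hubs-compatible endpoint and report mask attributes.
import json
from azure.eventhub import EventHubConsumerClient

# Placeholder: the Event Hubs-compatible connection string of your IoT Hub.
CONNECTION_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...;EntityPath=..."

def on_event(partition_context, event):
    payload = json.loads(event.body_as_str())
    for detection in payload.get("detections", []):
        attributes = detection.get("metadata", {}).get("attributes", {})
        if "face_mask" in attributes or "face_Mask" in attributes:
            print("Person detected wearing a mask")
        elif "face_noMask" in attributes:
            print("Person detected without a mask")

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group="$Default"
)
with client:
    # Blocks and calls on_event for each telemetry message received.
    client.receive(on_event=on_event, starting_position="-1")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;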
&lt;P&gt;To learn how to build business applications with spatial analysis, follow these &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_blank" rel="noopener"&gt;instructions&lt;/A&gt; to deploy a sample Azure web application that presents a live view of people counting events in a physical space. You can modify this app to use other spatial analysis operations and make changes based on the event output of the container.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="5"&gt;Mask detection in the cloud&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;Mask detection is also available through the face detection cloud endpoint of the Azure Cognitive Services &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/face/" target="_blank" rel="noopener"&gt;Face API&lt;/A&gt;. This capability analyzes an image and detects one or more human faces, along with attributes for each face in the image. The face mask attribute is available with the latest detection_03 model, along with the additional attribute &lt;EM&gt;“noseAndMouthCovered”&lt;/EM&gt;, which indicates whether the mask covers both the nose and mouth.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To leverage the latest mask detection capability, specify the detection model in the API request by setting the detectionModel parameter to detection_03. Refer to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model" target="_blank" rel="noopener"&gt;How to specify a detection model&lt;/A&gt; to learn more about the capabilities of each detection model and to see sample code for calling it.&lt;/P&gt;
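&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a minimal illustration, you can call the detection endpoint directly over REST. The sketch below uses the Python requests package; the endpoint, key, and image URL are placeholders for your own Face resource, and the parameter names follow the detection model documentation linked above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# pip install requests
# Minimal sketch: detect faces with the detection_03 model and request
# the mask attribute. ENDPOINT, KEY, and the image URL are placeholders.
import requests

ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
KEY = "YOUR-FACE-API-KEY"

response = requests.post(
    ENDPOINT + "/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",   # required for the mask attribute
        "returnFaceAttributes": "mask",
    },
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/people-wearing-masks.jpg"},
)
response.raise_for_status()

for face in response.json():
    mask = face["faceAttributes"]["mask"]
    print(mask["type"], "noseAndMouthCovered:", mask["noseAndMouthCovered"])&lt;/LI-CODE&gt;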
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="facemask.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261770i6009C7B79E61C4C0/image-size/large?v=v2&amp;amp;px=999" role="button" title="facemask.png" alt="Face mask detection with Face Service" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Face mask detection with Face Service&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Detection_03 API response with face mask attribute:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;  {
    "faceId": "eee58bd3-0b54-4f48-9a96-c9c60724ee80",
    "faceRectangle": {
      "top": 171,
      "left": 1212,
      "width": 79,
      "height": 125
    },
    "faceAttributes": {
      "mask": {
        "type": "faceMask",
        "noseAndMouthCovered": “true”
      }
  },
  {                         
   "faceId": "2d83c3c1-7266-4b84-b47b-a65645368021",
    "faceRectangle": {
      "top": 364,
      "left": 600,
      "width": 66,
      "height": 80
    },
    "faceAttributes": {
      "mask": {
        "type": "faceMask",
        "noseAndMouthCovered": “true”
     }
  },
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Responsible AI and Deployment Guide&lt;/H3&gt;
&lt;P&gt;Microsoft’s principled approach enables developers to build rich solutions while ensuring responsible use.&lt;/P&gt;
&lt;P&gt;Responsible &lt;A href="https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/" target="_blank" rel="noopener"&gt;deployment recommendations&lt;/A&gt; for spatial analysis are provided in accordance with the Microsoft &lt;A href="https://www.microsoft.com/ai/responsible-ai" target="_blank" rel="noopener"&gt;Responsible AI Principles&lt;/A&gt;: fairness, reliability &amp;amp; safety, privacy &amp;amp; security, inclusiveness, transparency, and accountability. For general guidelines and specific recommendations for camera height, angle, and camera-to-focal-point distance, see the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-camera-placement" target="_blank" rel="noopener"&gt;Camera placement guide&lt;/A&gt;. Also refer to the Face API &lt;A href="https://azure.microsoft.com/en-us/resources/transparency-note-azure-cognitive-services-face-api/" target="_blank" rel="noopener"&gt;Transparency Note&lt;/A&gt; for clear guidance on the use of facial recognition, to help ensure it fits your goals and achieves accurate results.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get Started&lt;/H3&gt;
&lt;P&gt;Learn more with our documentation &lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-container" target="_blank" rel="noopener"&gt;Spatial analysis&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/quickstarts/client-libraries?tabs=visual-studio&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;QuickStart: Face Service&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Follow the tutorial to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_blank" rel="noopener"&gt;Create a People Counting Web App&lt;/A&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/tutorials/faceapiincsharptutorial" target="_blank" rel="noopener"&gt;Detect faces using the .NET SDK&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Learn about &lt;A href="https://techcommunity.microsoft.com/Azure%20Stack%20Edge" target="_blank" rel="noopener"&gt;Azure Stack Edge&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/services/iot-hub" target="_blank" rel="noopener"&gt;Azure IoT Hub&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 08 Mar 2021 21:11:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/mask-detection-now-available-in-preview-via-azure-cognitive/ba-p/2194157</guid>
      <dc:creator>vaparth</dc:creator>
      <dc:date>2021-03-08T21:11:35Z</dc:date>
    </item>
    <item>
      <title>Put AI into practice with Microsoft's Azure AI Hackathon</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/put-ai-into-practice-with-microsoft-s-azure-ai-hackathon/ba-p/2193807</link>
      <description>&lt;P&gt;If you’ve been looking for a reason to get started with AI to solve a particular problem or use case, look no further! We invite you to put your skills to the test and apply Azure AI to a new or existing project. As you may have seen in an earlier &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678" target="_blank" rel="noopener"&gt;post by Anand Raman&lt;/A&gt;, we have been hosting an &lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Azure AI hackathon&lt;/A&gt; in which you can submit your project and be eligible to win prizes. Developers of all backgrounds and skill levels are welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models with Cognitive Services or building your own custom ML models with Azure Machine Learning.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azureai.devpost.com/" target="_self"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_0-1615224072279.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261685i967787DF729913DB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_0-1615224072279.png" alt="AI Hackathon homepage" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;AI Hackathon homepage&lt;/span&gt;&lt;/span&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you’re interested in participating, visit the &lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Azure AI Hackathon page&lt;/A&gt; to get started. The deadline is April 5&lt;SUP&gt;th&lt;/SUP&gt; so you still have time to build and submit a project! Use one or more of the following Azure AI services to build a new project or update an existing project:&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank"&gt;Azure Machine Learning&lt;/A&gt;,&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/" target="_blank"&gt;Azure Cognitive Services&lt;/A&gt;, &lt;A href="https://github.com/microsoft/botframework-sdk" target="_self"&gt;Bot Framework&lt;/A&gt; and&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_blank"&gt;Azure Cognitive Search&lt;/A&gt;.&amp;nbsp;Projects may integrate with other Azure services, open source technologies (including but not limited to frameworks, libraries, and APIs) and physical hardware of your choice.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you’re looking for a little inspiration, below are a few examples of past winners:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 First Place– Trashé&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_1-1615224072288.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261684iA67BCD961646B0CE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_1-1615224072288.png" alt="Trashe Smarter Recycling solution" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Trashe Smarter Recycling solution&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Submitted by Nathan Glover and Stephen Mott, Trashé is a SmartBin that aims to help people make more informed recycling decisions. While the idea is super impactful, it’s even more powerful when you see it in action: not just the intelligence, but the end-to-end scenario of how it can be applied in a real-world environment.&lt;/P&gt;
&lt;P&gt;This team used many Azure services to connect the hardware, intelligence, and presentation layers—you can see this is a well-researched architecture that is reusable in multiple scenarios.&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/classify-images-with-custom-vision-service/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Custom Vision&lt;/A&gt;&amp;nbsp;was a great choice in this case, enabling the team to create a well-performing model with very little training data. The more we recycle, the better the model will get. It was great to see the setup instructions included to help build unique versions of Trashé so users can contribute to helping the environment by recycling correctly within their local communities—this community approach is incredibly scalable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 Second Place- AfriFarm&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_2-1615224072292.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261683i8BEE1DF9CE5A2F28/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_2-1615224072292.png" alt="wmendoza_2-1615224072292.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Niza Siwale’s app recognizes crop diseases from images using&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/learn/modules/intro-to-azure-machine-learning-service/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;&amp;nbsp;and publishes the findings so anyone can track disease breakouts. This also provides a real-time update for government agencies to act quickly and provide support to affected communities. As quoted by Niza, this project has an incredible reach to a possible 200 million farmers whose livelihoods depend on farming in Africa.&lt;/P&gt;
&lt;P&gt;The project created a simple Android application where users take photos of crops for analysis, so each farmer gets information when they need it; users can also contribute their own findings back to the community around them, keeping everyone more informed and connected. Using the popular Keras framework along with the Azure Machine Learning service, the project built and deployed a solid plant disease recognition model that can be called from the application. Future work or improved versions of the model can be monitored and deployed in a development cycle, so the model's progression can be tracked over time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2019 Third Place- Water Level Anomaly detector&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_3-1615224072301.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261686i03A8EDC1BC4C9CAE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_3-1615224072301.png" alt="wmendoza_3-1615224072301.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Roy Kincaid’s project identifies drastic changes in water levels using an ultrasonic sensor that could be useful for detecting potential floods and natural disasters. This information can then be used to provide adequate warning for people to best prepare for major changes in their environment. Water Level Anomaly Detector could also be beneficial for long-term analysis of the effects of climate change. This is another great example of an end-to-end intelligent solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Roy is highly skilled in the hardware and connection parts of this project, so it was brilliant to see the easy integration of the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Anomaly Detector API&lt;/A&gt;&amp;nbsp;from&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/?WT.mc_id=azureaihackathon-blog-amynic" target="_blank" rel="noopener"&gt;Azure Cognitive Services&lt;/A&gt;&amp;nbsp;and to hear how quickly Roy could start using the service. Many IoT scenarios have a similar need for detecting rates and levels; in fact, Roy had hinted at a coffee level detector in the future.&amp;nbsp; In a world where we all want to do our part to help the environment, it’s a great example of how monitoring enables us to measure changes over time and be alerted when issues arise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These are just 3 of the past winners and submissions. For more inspiration, visit our &lt;A href="https://azureai2019.devpost.com/project-gallery" target="_blank" rel="noopener"&gt;gallery of past submissions&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_4-1615224072337.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/261687iD18EF85D12B00E6A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_4-1615224072337.png" alt="wmendoza_4-1615224072337.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Resources to get started&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azureai.devpost.com/" target="_blank" rel="noopener"&gt;Sign up for the Azure AI hackathon&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Visit our &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;AI for Developers resources page&lt;/A&gt; for tutorials and a curated 30-day learning journey&lt;/LI&gt;
&lt;LI&gt;Visit our &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;ML for Data Scientists resources page&lt;/A&gt; for tutorials and a curated 30-day learning journey&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 08 Mar 2021 17:52:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/put-ai-into-practice-with-microsoft-s-azure-ai-hackathon/ba-p/2193807</guid>
      <dc:creator>wmendoza</dc:creator>
      <dc:date>2021-03-08T17:52:41Z</dc:date>
    </item>
    <item>
      <title>Form Recognizer  now reads more languages, processes IDs and invoices, trains on tables, and more</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428</link>
      <description>&lt;P&gt;Documents contain invaluable information powering core business processes. Extracting information from these documents with minimum manual intervention helps bolster organizational efficiency and productivity. As more and more processes and workflows get automated, the need for new features to help extract text and structures increases.&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce the newest updates to Form Recognizer, which will be available on March 15, 2021.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What’s New?&lt;/H1&gt;
&lt;P&gt;Form Recognizer v2.1 public preview 3 will be available on March 15, 2021, and it will include:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Extract data from invoices&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Invoices are complex documents that vary in structure and contain data that is vital to organizations’ business processes. One of the most challenging tasks in extracting data from invoices is extracting the line items. The Form Recognizer Invoice API now supports line-item extraction; it extracts the full line item and its parts: description, amount, quantity, product ID, date, and more. With a simple API or SDK call you can extract all the data from your invoices: text, tables, key-value pairs, and line items.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_0-1614713996067.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260103iD80DAC23070C7A6D/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_0-1614713996067.png" alt="chril1_0-1614713996067.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 1 Line items are extracted from invoices&lt;/P&gt;
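&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For illustration, an invoice analysis request against the v2.1 preview REST API might look like the following sketch; the resource name, subscription key, and document URL are placeholders. The call responds with an Operation-Location header, which you poll to retrieve the extracted fields, tables, and line items.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;POST https://[your-resource-name].cognitiveservices.azure.com/formrecognizer/v2.1-preview.3/prebuilt/invoice/analyze
Content-Type: application/json
Ocp-Apim-Subscription-Key: [your-subscription-key]

{
  "source": "https://[your-storage-account].blob.core.windows.net/docs/sample-invoice.pdf"
}
&lt;/LI-CODE&gt;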
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Extract data from IDs&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The new pre-built ID model enables customers to take worldwide passports and U.S. driver’s licenses and return structured data representing the information available on the IDs. The new ID API extracts the text and values of interest from IDs, such as document number, last name, first name, date of expiration, country, and more.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_1-1614713996212.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260105i889A7DC374BC8FA0/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_1-1614713996212.png" alt="chril1_1-1614713996212.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 2 The pre-built ID model can extract information from passports and U.S. driver’s licenses&lt;/P&gt;
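&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A minimal request sketch follows (again with placeholder values); as with the other prebuilt models, the result is retrieved by polling the returned Operation-Location URL.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;POST https://[your-resource-name].cognitiveservices.azure.com/formrecognizer/v2.1-preview.3/prebuilt/idDocument/analyze
Content-Type: application/json
Ocp-Apim-Subscription-Key: [your-subscription-key]

{
  "source": "https://[your-storage-account].blob.core.windows.net/docs/sample-passport.jpg"
}
&lt;/LI-CODE&gt;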
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Supervised table labeling and training, empty-value labeling&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;In addition to the Form Recognizer &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011" target="_blank" rel="noopener"&gt;state-of-the-art deep learning automatic table extraction capabilities&lt;/A&gt;, it now also enables customers to train and label tables. This new release includes the ability to label line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained and documents are analyzed using this model, the new line items will be extracted as part of the JSON output in the documentResults section.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_2-1614713996234.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260104i7EA32F5641870EC1/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_2-1614713996234.png" alt="chril1_2-1614713996234.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Figure 3 Label tables in your training dataset&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to labeling tables, you can now label empty values and regions; if some documents in your training set do not have values for some fields, you can label those empty values so that your model learns to extract values properly from analyzed documents.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_3-1614713996261.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260106iE214516451A73BC4/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_3-1614713996261.png" alt="chril1_3-1614713996261.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Natural reading order, handwriting classification, and page selection&lt;/H2&gt;
&lt;P&gt;With this update, you can choose to get the text line outputs in natural reading order instead of the default left-to-right and top-to-bottom ordering. Set the new readingOrder query parameter to “natural” for a more human-friendly reading order output, as shown in the following example. Note that the first column’s text lines are output in order before those of the second and third columns.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_4-1614713996294.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260107i84780601C0EEF0A2/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_4-1614713996294.png" alt="chril1_4-1614713996294.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, for Latin languages only, Form Recognizer will classify text lines as handwritten or not and give a confidence score, as seen below.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_5-1614713996397.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260108i151C506F95A6BAC2/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_5-1614713996397.png" alt="chril1_5-1614713996397.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_6-1614713996398.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260109i8241F1C3E8178E12/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_6-1614713996398.png" alt="chril1_6-1614713996398.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Furthermore, when analyzing a multi-page PDF or TIFF, you can now specify which pages you want to analyze.&lt;/P&gt;
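&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a sketch, both options are query parameters on the Layout analyze request; the resource name, subscription key, and document URL below are placeholders, and the parameter values shown request natural reading order for pages 1 through 3.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;POST https://[your-resource-name].cognitiveservices.azure.com/formrecognizer/v2.1-preview.3/layout/analyze?readingOrder=natural&amp;amp;pages=1-3
Content-Type: application/json
Ocp-Apim-Subscription-Key: [your-subscription-key]

{
  "source": "https://[your-storage-account].blob.core.windows.net/docs/multi-page-report.pdf"
}
&lt;/LI-CODE&gt;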
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Pre-built Receipt model quality improvements&lt;/H2&gt;
&lt;P&gt;This new update includes a number of quality improvements for the pre-built Receipt model, especially around line item extraction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Our Customers &amp;amp; Partners&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_0-1614714476433.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260159i3A74F9A570AD7DA6/image-size/small?v=v2&amp;amp;px=200" role="button" title="chril1_0-1614714476433.png" alt="chril1_0-1614714476433.png" /&gt;&lt;/span&gt;AvidXchange has developed an account payable automation solution leveraging Form Recognizer. “By partnering with Microsoft, we’re able to deliver an accounts payable automation solution for the middle market that’s truly powered by machine learning,” said Chris Tinsley, Chief Technology Officer at AvidXchange. “Our customers will benefit from faster invoice processing times and increased accuracy so we can help ensure their suppliers are paid the right amount, at the right time.”&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_1-1614714476450.png" style="width: 137px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260158i9C4223F271AD1BBF/image-dimensions/137x72?v=v2" width="137" height="72" role="button" title="chril1_1-1614714476450.png" alt="chril1_1-1614714476450.png" /&gt;&lt;/span&gt;WEX has developed a tool to process Explanation of Benefits documents using Form Recognizer. Matt Dallahan, Senior Vice President of Product Management and Strategy, said “The technology is truly amazing. I was initially worried that this type of solution would not be feasible, but I soon realized that the Form Recognizer can read virtually any document with accuracy.”&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_2-1614714476568.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260160i5B7B8BB596514014/image-size/large?v=v2&amp;amp;px=999" role="button" title="chril1_2-1614714476568.png" alt="chril1_2-1614714476568.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_3-1614714476570.png" style="width: 122px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260161i1EC2D3C6CC8F32AF/image-dimensions/122x31?v=v2" width="122" height="31" role="button" title="chril1_3-1614714476570.png" alt="chril1_3-1614714476570.png" /&gt;&lt;/span&gt;GEP has developed an invoice processing solution for a client using Form Recognizer. “At GEP, we are seeing AI and automation make a profound impact on procurement and the supply chain. By combining our AI solution with Microsoft Form Recognizer, we automated the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort, while improving accuracy, controls and compliance on a global scale,” said Sarateudu Sethi, GEP’s Vice President of Artificial Intelligence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chril1_4-1614714476596.png" style="width: 95px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260162i739B71AE5113A9D6/image-dimensions/95x45?v=v2" width="95" height="45" role="button" title="chril1_4-1614714476596.png" alt="chril1_4-1614714476596.png" /&gt;&lt;/span&gt;&amp;nbsp;“At Cross Masters, using cutting-edge AI technologies is not only a passion, it is an essential part of our work culture that requires continuous innovation. One of our latest success stories is automation of manual paperwork, required to process thousands of invoices. Thanks to Microsoft Form Recognizer’s AI engine we were able to develop a unique customized solution that provides to our clients market insights from large set of collected invoices. What we find the most convenient is human beating extraction quality and continuous introduction of new features, such as model composing or table labelling. This assures our client’s market advantage and helps our product to be the best-in-class solution” Jan Hornych, Head of Marketing Automation, Cross Masters&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Try out Form Recognizer&lt;/H1&gt;
&lt;P&gt;To get started with &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;, please login to the &lt;A href="https://azure.microsoft.com/en-us/features/azure-portal/" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt; to create a Form Recognizer resource. Once your resource is created, you can start exploring Form Recognizer, with the improvements mentioned above coming on March 15. You can learn more about Form Recognizer &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Thu, 04 Mar 2021 23:59:38 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428</guid>
      <dc:creator>christina-lee</dc:creator>
      <dc:date>2021-03-04T23:59:38Z</dc:date>
    </item>
    <item>
      <title>Introducing semantic search: Bringing more meaningful results to Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/ba-p/2175636</link>
      <description>&lt;P&gt;A few years ago, it became clear to our team that AI could bring value to our customers, from improvements in ingestion to data exploration. We knew we had a lot of these valuable assets around Microsoft, so our team set out on a mission to bring as much “intelligence” as we could to the product then known as Azure Search.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the first phase of this mission, we took on "unsearchable" content; about 80% of business relevant data is in unstructured formats such as PDFs, PowerPoints, Word documents, JPEGs, CSVs, etc. We added AI powered enrichments to our ingestion process, enabling the ability to extract structure, insights and transform information from your data.&amp;nbsp; These capabilities were well received by our customers, culminating in a product rebrand as "&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_self"&gt;Azure Cognitive Search&lt;/A&gt;".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am happy to announce that in our continuation of this journey, we are bringing state of the art AI capabilities to the “head” of our product, the core search sub-system. In partnership with the Bing team, we have integrated their semantic search investments (100s of development years and millions of dollars in compute time) into our query infrastructure, effectively enabling any developer to leverage this investment over searchable content that you own and manage. We believe semantic search on Azure Cognitive Search offers the best combination of search relevance, developer experience, and cloud service capabilities available on the market.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This post explains what new capabilities are available to you and how you can get started today. I would also encourage you to look at the post called “&lt;A href="https://aka.ms/ScienceBehindSemanticSearchPost" target="_blank" rel="noopener"&gt;Bing’s AI behind semantic search&lt;/A&gt;” that goes deeper into the Bing technology that made semantic search possible.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are launching several exciting semantic search features in a public preview:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Ranking&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Customers have grown accustomed to using natural language queries in web search engines, but these queries usually do not perform as well when using a traditional keyword-based retrieval approach with ranking only based on term frequencies. To demonstrate this, consider what happens when a customer types a query like “&lt;EM&gt;how to add a user in Office&lt;/EM&gt;” in the Microsoft documentation. &amp;nbsp;For this purpose, we loaded all the Microsoft documentation dataset into Azure Cognitive Search so that we could compare the results between the default lexical based ranking algorithm and the semantic ranking algorithm.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Traditional retrieval and ranking approach&lt;/H3&gt;
&lt;P&gt;The default ranker (BM25) &lt;EM&gt;uses&lt;/EM&gt; words as discrete units and predicts relevance by using the frequencies of terms in the corpus. &amp;nbsp;BM25 works well when searching for keywords, but it struggles to find the most relevant documents when issuing a natural language query.&lt;/P&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="keyword-search.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259293iDD72049F2FB2DBA7/image-size/large?v=v2&amp;amp;px=999" role="button" title="keyword-search.png" alt="keyword-search.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that the results do meet the lexical frequency requirements. For instance, inspecting the top document “&lt;A href="https://docs.microsoft.com/en-us/office/dev/add-ins/testing/testing-and-troubleshooting" target="_blank" rel="noopener"&gt;Troubleshoot user errors with Office Add-ins&lt;/A&gt;” shows that there are a lot of mentions of terms like “office”, “user”, “add”, and “how to” in the document, but unfortunately the article does not provide the information we meant to query for.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Semantics-based ranking&lt;/H3&gt;
&lt;P&gt;With the release of semantic search, now we can enable a ranking algorithm that will use deep neural networks to rank the articles based on how “meaningful” they are relative to the query. Internally, this is a ranker that is applied on top of the results returned by the BM25-based ranker.&amp;nbsp; Using semantic search capabilities, these are the top results for our query:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-ranking.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259297i7D3F92BD5FC051FD/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-ranking.png" alt="semantic-ranking.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;I read the content of the top-document called “&lt;A href="https://docs.microsoft.com/en-us/microsoft-365/admin/add-users/add-users?view=o365-worldwide" target="_blank" rel="noopener"&gt;Add users and assign licenses at the same time&lt;/A&gt;”, and it is clear that this is exactly the document I need! Semantic search made this connection even though the title and content are not syntactically close to my query.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Answers&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;In the previous example, the title of the document by itself did not make it very easy for me to catch if that was a relevant document or not. I still had to read it to find the snippet in the documentation that told me how to add a user to Office.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The good news is that now you can also get semantic answers! It is one of my favorite features; it uses an AI model that extracts relevant passages from the top documents, and then ranks them on their likelihood of being an answer to the query. If we find a passage that has a high likelihood of answering the question, we will promote it as a semantic answer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is what it looks like, in this case. Note that we even leveraged a model from Bing to provide highlights for the most relevant section in the semantic answer.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-answer.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260399iB2DAD3E58640B6E1/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-answer.png" alt="semantic-answer.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Semantic Captions&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Similarly, we can extract the most relevant section of each document returned so you can quickly skim through the results and see if they have the content that you care about; making it easier for you to triage the results briefly and go deeper into the ones that you think are relevant given your context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="semantic-caption2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260400i62B4FCC1B258B55B/image-size/large?v=v2&amp;amp;px=999" role="button" title="semantic-caption2.png" alt="semantic-caption2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started today!&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Using semantic search is easy. After you sign up for the preview at &lt;A href="http://aka.ms/semanticpreview" target="_blank" rel="noopener"&gt;http://aka.ms/semanticpreview&lt;/A&gt;, all you need to do is change your query parameters as part of the request as shown below. Note that there is no need to re-index any of your content!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;STRONG&gt;Query parameter&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;queryType&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “semantic” to indicate that you would like semantic ranking and answers.&lt;BR /&gt;Other values supported: “simple” and “full”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;searchFields&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Ordered list of fields that semantic ranking should be applied on. If you have a title or a short field that describes your document, we recommend that to be your first field.&amp;nbsp; Follow that by the url (if any), then the body of the document, and then any other relevant fields.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;queryLanguage&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;“en-us” is the only supported value today.&lt;BR /&gt;We will be adding more languages soon. Stay tuned.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;speller&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “lexicon” if you would like spell correction to occur on the query terms. Otherwise set to “none”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160"&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;answers&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="379"&gt;
&lt;P&gt;Set to “extractive” if you would like to get extractive answers. Otherwise set to “none”.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;&lt;STRONG&gt;Sample Query&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-preview     
{   
      "search": " Where was Alan Turing born?",   
      "queryType": "semantic", 
      "searchFields": "title,url,body", 
      "queryLanguage": "en-us", 
      "speller": "lexicon",
      "answers": "extractive"  
}   
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Sample Response&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "@search.answers": [
        {
            "key": "a1234",               
            "text": "Turing was born in Maida Vale, London, while his father, Julius…",
            "highlights": " Turing was born in &amp;lt;strong&amp;gt;Maida Vale, London&amp;lt;/strong&amp;gt; , while …",
            "score": 0.87802511
        }
    ],
    "value": [
        {
            "@search.score": 51.64714,
            "@search.rerankerScore": 1.9928148165345192,
            "@search.captions": [
                {
                    "text": " Alan Mathison Turing, (born June 23, 1912, 
                             London, England—died June 7, 1954…",
                    "highlights": " Alan Mathison Turing, (born June 23, 1912,
                             &amp;lt;strong/&amp;gt;London, England&amp;lt;/strong&amp;gt;—died June…",
                       }
            ],
            "id": "b5678",
            "body":  "…"
        },
        …  
    ]
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more about &lt;A href="https://aka.ms/SemanticMainPage" target="_blank" rel="noopener"&gt;Semantic Search in our documentation.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am personally super excited about these new capabilities, the efficiencies that they will bring to you, and the progression of our vision to bring the best AI capabilities at Microsoft to Azure developers!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Luis Cabrera – on behalf of the Azure Cognitive Search team&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Customers &amp;amp; Partners&lt;/STRONG&gt;&lt;/H4&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90" style="width: 90px; border-style: none;"&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_10" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ppl.png" style="width: 183px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259286i60ABDBE97381C8AE/image-size/large?v=v2&amp;amp;px=999" role="button" title="ppl.png" alt="ppl.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="650" style="border-style: none;"&gt;
&lt;P&gt;&lt;A title="PPL Case Study" href="https://customers.microsoft.com/en-us/story/1344073022379788689-ppl-energy-azure" target="_blank" rel="noopener"&gt;Case Study&lt;/A&gt;:&amp;nbsp;&lt;EM&gt;PPL Electric Utilities Corporation, a utilities company, is working with Neudesic to create a web application with Azure Cognitive search to empower its field workers to find the most relevant information wherever they are.&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90" style="border-style: none;"&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_11" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="howden.png" style="width: 213px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/259287iF2EBFD672B695BD5/image-size/large?v=v2&amp;amp;px=999" role="button" title="howden.png" alt="howden.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="650" style="border-style: none;"&gt;
&lt;P&gt;&lt;A title="Howden Case Study" href="https://customers.microsoft.com/en-us/story/1344058341075309890-howden-energy-azure-ai" target="_self"&gt;Case Study&lt;/A&gt;:&amp;nbsp;&lt;EM&gt;Howden teamed up with OrangeNXT to further improve Smart Records using key elements of their digitalNXT Search, a fully managed cloud solution powered by Azure Cognitive Search. &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;Call to Action&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A href="http://aka.ms/semanticpreview" target="_blank" rel="noopener"&gt;Preview sign-up form&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/3-10-21-announcing-an-azure-cognitive-search-ama/m-p/2157224" target="_blank" rel="noopener"&gt;Cognitive Search Team Ask Me Anything (March 10 2021)&lt;/A&gt; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN&gt;&lt;STRONG&gt;Resources&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A href="https://aka.ms/SemanticMainPage" target="_blank" rel="noopener"&gt;Documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://aka.ms/SemanticSearchMechanicsVideo2" target="_blank" rel="noopener"&gt;Mechanics Video&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://aka.ms/ScienceBehindSemanticSearchPost" target="_blank" rel="noopener"&gt;Bing science behind semantic search&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Mar 2021 00:00:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/ba-p/2175636</guid>
      <dc:creator>Luis Cabrera-Cordon</dc:creator>
      <dc:date>2021-03-05T00:00:36Z</dc:date>
    </item>
    <item>
      <title>Ombromanie: Creating Hand Shadow stories with Azure Speech and TensorFlow.js Handposes</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/2166579</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_0-1613690550387.jpeg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255920iBE801D0624C8D1B8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_0-1613690550387.jpeg" alt="jelooper_0-1613690550387.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Even a tea light suffices to create a great effect&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;In 2020, and now into 2021, many folks are reverting back to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts that they used to love. Papermaking, anyone? All you need is a few tools and torn up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_1-1613690550389.jpeg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255921i2CEDDF41FB98B141/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_1-1613690550389.jpeg" alt="jelooper_1-1613690550389.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;This TikTok creator has thousands of views for their handshadow tutorials&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;But what's a developer to do when trying to capture that #cottagecore vibe in a web app?&lt;/P&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#high-tech-for-the-cottage" target="_blank" rel="noopener" name="high-tech-for-the-cottage"&gt;&lt;/A&gt;High Tech for the Cottage&lt;/H2&gt;
&lt;P&gt;While exploring the art of hand shadows, I wondered whether some of the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/jlooper/posedance" target="_blank" rel="noopener"&gt;recent work&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class=" fluidvids"&gt;&lt;IFRAME src="https://www.youtube.com/embed/ZWvZBEeS4qQ" width="710" height="399" allowfullscreen="allowfullscreen" class=" fluidvids-elem" loading="lazy" data-mce-fragment="1"&gt;&lt;/IFRAME&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;LI-WRAPPER&gt;&lt;/LI-WRAPPER&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#a-show-of-hands" target="_blank" rel="noopener" name="a-show-of-hands"&gt;&lt;/A&gt;A Show Of Hands&lt;/H2&gt;
&lt;P&gt;When you start researching hand poses, it's striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_2-1613690550390.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255922iC341945EDADA45EE/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_2-1613690550390.png" alt="jelooper_2-1613690550390.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;MSR throwing hands&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;There are dozens of handpose libraries already on GitHub:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/topics/hand-tracking" target="_blank" rel="noopener"&gt;An entire GitHub topic on hand tracking&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/xinghaochen/awesome-hand-pose-estimation" target="_blank" rel="noopener"&gt;'Awesome' list for hand tracking&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://sites.google.com/view/hands2019/challenge" target="_blank" rel="noopener"&gt;Challenges and hackathons&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;There are many applications where tracking hands is a useful activity:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;• Gaming&lt;BR /&gt;• Simulations / Training&lt;BR /&gt;• "Hands free" uses for remote interactions with things by moving the body&lt;BR /&gt;• Assistive technologies&lt;BR /&gt;• TikTok effects&lt;BR /&gt;• Useful things like&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://mcclanahoochie.com/accordionhands/" target="_blank" rel="noopener"&gt;Accordion Hands apps&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the more interesting new libraries,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://dev.to/midiblocks/introducing-handsfree-js-integrate-hand-face-and-pose-gestures-to-your-frontend-4g3p" target="_blank" rel="noopener"&gt;handsfree.js&lt;/A&gt;, offers an excellent array of demos in its effort to move to a hands free web experience:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_3-1613690550409.gif" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255925i68693FBE43C49072/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_3-1613690550409.gif" alt="jelooper_3-1613690550409.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Handsfree.js, a very promising project&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;As it turns out, hands are pretty complicated things. They&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;each&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;include 21 keypoints (vs PoseNet's 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_4-1613690550394.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255924i16174859EC1DA772/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_4-1613690550394.png" alt="jelooper_4-1613690550394.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js's handposes, and MediaPipe's. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe's handposes are perfect for our project. We will have to compromise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://github.com/tensorflow/tfjs-models/tree/master/handpose" target="_blank" rel="noopener"&gt;TensorFlow.js's handposes&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it only currently supports single hand poses, which is not optimal for good hand shadow shows.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A href="https://google.github.io/mediapipe/solutions/hands" target="_blank" rel="noopener"&gt;MediaPipe's handpose models&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(which are used by TensorFlow.js) do allow for dual hands BUT its API does not allow for much styling of the keypoints so that drawing shadows using it is not obvious.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;One other library,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/andypotato/fingerpose" target="_blank" rel="noopener"&gt;fingerposes&lt;/A&gt;, is optimized for finger spelling in a sign language context and is worth a look.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Since it's more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's get to work to build this app.&lt;/P&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#scaffold-a-static-web-app" target="_blank" rel="noopener" name="scaffold-a-static-web-app"&gt;&lt;/A&gt;Scaffold a Static Web App&lt;/H2&gt;
&lt;P&gt;As a Vue.js developer, I always use the Vue CLI to scaffold an app using&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;vue create my-app&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and creating a standard app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of including my app files in a folder named&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;app&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and creating an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;api&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;folder to include an Azure function to store a key (more on this in a minute).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow-models/handpose&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^0.0.6&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-backend-cpu&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-backend-webgl&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-converter&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;@tensorflow/tfjs-core&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^2.7.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;SPAN class="p"&gt;...&lt;/SPAN&gt;
&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;microsoft-cognitiveservices-speech-sdk&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;^1.15.0&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
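&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The component's import block isn't shown in this article, but given the dependencies above it would presumably look something like the minimal sketch below (the handpose package provides the model loader, and the tfjs package provides the &lt;CODE&gt;tf.setBackend&lt;/CODE&gt; call used further down):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// hypothetical import block (not shown in the original article)
// handpose provides handpose.load(); tfjs provides tf.setBackend()
import * as handpose from "@tensorflow-models/handpose";
import * as tf from "@tensorflow/tfjs";
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;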
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#set-up-the-view" target="_blank" rel="noopener" name="set-up-the-view"&gt;&lt;/A&gt;Set up the View&lt;/H2&gt;
&lt;P&gt;We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight html"&gt;&lt;CODE&gt;&lt;SPAN class="nt"&gt;&amp;lt;div&lt;/SPAN&gt; &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"canvas-wrapper column is-half"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
&lt;SPAN class="nt"&gt;&amp;lt;canvas&lt;/SPAN&gt; &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"output"&lt;/SPAN&gt; &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"output"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&amp;lt;/canvas&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;video&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"video"&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"video"&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;playsinline&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;style=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"
          -webkit-transform: scaleX(-1);
           transform: scaleX(-1);
           visibility: hidden;
           width: auto;
           height: auto;
           position: absolute;
         "&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;gt;&amp;lt;/video&amp;gt;&lt;/SPAN&gt;
 &lt;SPAN class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/SPAN&gt;
 &lt;SPAN class="nt"&gt;&amp;lt;div&lt;/SPAN&gt; &lt;SPAN class="na"&gt;class=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"column is-half"&lt;/SPAN&gt;&lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;canvas&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;class=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"has-background-black-bis"&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;id=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"shadowCanvas"&lt;/SPAN&gt;
       &lt;SPAN class="na"&gt;ref=&lt;/SPAN&gt;&lt;SPAN class="s"&gt;"shadowCanvas"&lt;/SPAN&gt;
     &lt;SPAN class="nt"&gt;&amp;gt;&lt;/SPAN&gt;
    &lt;SPAN class="nt"&gt;&amp;lt;/canvas&amp;gt;&lt;/SPAN&gt;
&lt;SPAN class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#load-the-model-start-keyframe-input" target="_blank" rel="noopener" name="load-the-model-start-keyframe-input"&gt;&lt;/A&gt;Load the Model, Start Keyframe Input&lt;/H2&gt;
&lt;P&gt;Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam and start watching the video's keyframes for hand poses. It's important at these steps to add error handling in case the model fails to load or there's no webcam available.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;mounted&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;tf&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;setBackend&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;backend&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
    &lt;SPAN class="c1"&gt;//async load model, then load video, then pass it to start landmarking&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;model&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;handpose&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;load&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Model is loaded! Now loading video&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="k"&gt;try&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;loadVideo&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="k"&gt;catch&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;throw&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;

    &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;landmarksRealTime&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;webcam&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
  &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
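&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;CODE&gt;loadVideo&lt;/CODE&gt; helper called in &lt;CODE&gt;mounted()&lt;/CODE&gt; isn't shown in this excerpt; a minimal sketch, assuming it simply wraps the &lt;CODE&gt;setupCamera&lt;/CODE&gt; method described next and starts playback, could look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// hypothetical loadVideo method (assumed, not shown in the article):
// wait for the camera stream, then start playback so keyframes flow to the model
async loadVideo() {
  const video = await this.setupCamera();
  video.play();
  return video;
},
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;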
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#setup-the-webcam" target="_blank" rel="noopener" name="setup-the-webcam"&gt;&lt;/A&gt;Setup the Webcam&lt;/H2&gt;
&lt;P&gt;Still working asynchronously, set up the camera to provide a stream of images:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;setupCamera&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;!&lt;/SPAN&gt;&lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt; &lt;SPAN class="o"&gt;||&lt;/SPAN&gt; &lt;SPAN class="o"&gt;!&lt;/SPAN&gt;&lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getUserMedia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;throw&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;Browser API navigator.mediaDevices.getUserMedia not available&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;navigator&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;mediaDevices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getUserMedia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;({&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;facingMode&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;user&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;VIDEO_WIDTH&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="na"&gt;height&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;VIDEO_HEIGHT&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;return&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Promise&lt;/SPAN&gt;&lt;SPAN class="p"&gt;((&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;resolve&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;srcObject&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;onloadedmetadata&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;resolve&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;};&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#design-a-hand-to-mirror-the-webcams" target="_blank" rel="noopener" name="design-a-hand-to-mirror-the-webcams"&gt;&lt;/A&gt;Design a Hand to Mirror the Webcam's&lt;/H2&gt;
&lt;P&gt;Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvas - red on top of the video, and black on top of the shadowCanvas. Since the shadowCanvas background is white, the hand is drawn as white as well and the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;landmarksRealTime&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="c1"&gt;//start showing landmarks&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//set up skeleton canvas&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;canvas&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;output&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//set up shadowCanvas&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;$refs&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;canvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getContext&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;2d&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getContext&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;2d&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="p"&gt;...&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//paint to main&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;clearRect&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; 
  &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;strokeStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;red&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fillStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;red&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;translate&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;scale&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;-&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//paint to shadow box&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;clearRect&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoWidth&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;videoHeight&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowColor&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;black&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowBlur&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowOffsetX&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;150&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowOffsetY&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;150&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;lineWidth&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;lineCap&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;round&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fillStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;white&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;strokeStyle&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;white&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;translate&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;width&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;scale&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="o"&gt;-&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="c1"&gt;//now you've set up the canvases, now you can frame its landmarks&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;frameLandmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#for-each-frame-draw-keypoints" target="_blank" rel="noopener" name="for-each-frame-draw-keypoints"&gt;&lt;/A&gt;For Each Frame, Draw Keypoints&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As the keyframes progress, the model predicts new keypoints for each of the hand's elements, and both canvases are cleared and redrawn.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;model&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;estimateHands&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

      &lt;SPAN class="k"&gt;if&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;result&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;landmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawKeypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;result&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="nx"&gt;predictions&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;annotations&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;requestAnimationFrame&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;frameLandmarks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#draw-a-lifelike-hand" target="_blank" rel="noopener" name="draw-a-lifelike-hand"&gt;&lt;/A&gt;Draw a Lifelike Hand&lt;/H2&gt;
&lt;P&gt;Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand's coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Re-identify the fingers and palm:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;     &lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="nl"&gt;thumb&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;2&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;3&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;4&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;indexFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;5&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;6&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;7&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;8&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;middleFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;9&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;10&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;11&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;12&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;ringFinger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;13&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;14&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;15&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;16&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
        &lt;SPAN class="nx"&gt;pinky&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;17&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;18&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;19&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;20&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="nl"&gt;palm&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;5&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;9&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;13&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;17&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;1&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;...and draw them to screen:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;    &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Object&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;keys&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;lt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="o"&gt;++&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;finger&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fingers&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="p"&gt;];&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fingerLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;finger&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;map&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;keypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawPath&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;false&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;Object&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;keys&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;for&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="mi"&gt;0&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt; &lt;SPAN class="o"&gt;&amp;lt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;length&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="o"&gt;++&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palm&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;palmArea&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;i&lt;/SPAN&gt;&lt;SPAN class="p"&gt;];&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palmLookupIndices&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;palm&lt;/SPAN&gt;&lt;SPAN class="p"&gt;].&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;map&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;keypoints&lt;/SPAN&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;idx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;drawPath&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sctx&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;points&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
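&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Both loops rely on a &lt;CODE&gt;drawPath&lt;/CODE&gt; helper that isn't included in the excerpt. A minimal sketch, assuming it traces the given keypoints as a path on both contexts and optionally closes (and fills) the palm polygon, might look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// hypothetical drawPath helper (assumed): traces a set of [x, y] keypoints
// onto the main canvas (ctx) and the shadow canvas (sctx)
drawPath(ctx, sctx, points, closePath) {
  const region = new Path2D();
  region.moveTo(points[0][0], points[0][1]);
  for (let i = 1; i &amp;lt; points.length; i++) {
    region.lineTo(points[i][0], points[i][1]);
  }
  if (closePath) {
    // close the palm polygon so it renders as a solid shape
    region.closePath();
    sctx.fill(region);
  }
  ctx.stroke(region);
  sctx.stroke(region);
},
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;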
&lt;P&gt;With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story.&lt;/P&gt;
&lt;P&gt;To do this, get a key from the Azure portal for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/?WT.mc_id=academic-14261-cxa" target="_blank" rel="noopener"&gt;Speech Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;by creating a Service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_5-1613690550393.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255923iE817738507CDC464/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_5-1613690550393.png" alt="jelooper_5-1613690550393.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can connect to this service by importing the SDK:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;CODE&gt;import * as sdk from "microsoft-cognitiveservices-speech-sdk";&lt;/CODE&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;...and start audio transcription after obtaining the API key from an Azure function in the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;/api&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;folder. This function retrieves the key, which is stored via the Azure portal in the configuration of the Azure Static Web App where the app is hosted.&lt;/P&gt;
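&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The article doesn't show that function's code, but a minimal sketch of an HTTP-triggered Node.js function, assuming the key lives in an application setting named &lt;CODE&gt;SPEECH_KEY&lt;/CODE&gt; (a hypothetical name), could be:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// api/getKey/index.js - hypothetical HTTP-triggered Azure Function
// returns the Speech key from an application setting (setting name assumed)
module.exports = async function (context, req) {
  context.res = {
    body: process.env["SPEECH_KEY"],
  };
};
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;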
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="k"&gt;async&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;startAudioTranscription&lt;/SPAN&gt;&lt;SPAN class="p"&gt;()&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;try&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="c1"&gt;//get the key&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;response&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;await&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;axios&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="kd"&gt;get&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;/api/getKey&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;subKey&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;response&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;data&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
        &lt;SPAN class="c1"&gt;//sdk&lt;/SPAN&gt;

        &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;speechConfig&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;SpeechConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fromSubscription&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;subKey&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
          &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;eastus&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="kd"&gt;let&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;audioConfig&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;AudioConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fromDefaultMicrophoneInput&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;sdk&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;SpeechRecognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;speechConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;audioConfig&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;

        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognized&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;s&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;result&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
          &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;story&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;push&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;text&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
        &lt;SPAN class="p"&gt;};&lt;/SPAN&gt;

        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recognizer&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;startContinuousRecognitionAsync&lt;/SPAN&gt;&lt;SPAN class="p"&gt;();&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="k"&gt;catch&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;message&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;error&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}&lt;/SPAN&gt;
    &lt;SPAN class="p"&gt;},&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;In this function, the SpeechRecognizer gathers text in chunks that it recognizes and organizes into sentences. That text is printed into a message string and displayed on the front end.&lt;/P&gt;
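&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the storyteller is finished, recognition presumably needs to be stopped as well; the Speech SDK provides &lt;CODE&gt;stopContinuousRecognitionAsync&lt;/CODE&gt; for that, so a companion stop method (hypothetical name) might be as simple as:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// hypothetical companion method: stop listening once narration is done
stopAudioTranscription() {
  if (this.recognizer) {
    this.recognizer.stopContinuousRecognitionAsync();
  }
},
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;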
&lt;H2&gt;&lt;A class="anchor" href="https://dev.to/azure/ombromanie-playing-with-hand-shadows-with-tensorflow-js-199l-temp-slug-5854224?preview=4c3c69d5e60a2b25962c039bfb5da752d120c002d803cc8cba48b58139acf0beee9cfd5681297a3b3f4f605576b6756b270fd168e6d69a47dddc4936#display-the-story" target="_blank" rel="noopener" name="display-the-story"&gt;&lt;/A&gt;Display the Story&lt;/H2&gt;
&lt;P&gt;In this last part, the output cast onto the shadowCanvas is saved as a stream and recorded using the MediaRecorder API:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;&lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;shadowCanvas&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;captureStream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;60&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt; &lt;SPAN class="c1"&gt;// 60 FPS recording&lt;/SPAN&gt;
      &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;MediaRecorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;stream&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="na"&gt;mimeType&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt; &lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;video/webm;codecs=vp9&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;});&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;ondataavailable&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;)&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&amp;gt;&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;chunks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;push&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;e&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;data&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="p"&gt;}),&lt;/SPAN&gt;
        &lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;recorder&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;start&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="mi"&gt;500&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
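&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the story is over, the recorder is presumably stopped so the last chunks are flushed through &lt;CODE&gt;ondataavailable&lt;/CODE&gt; before the clip is assembled; a minimal sketch (method name assumed):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;// hypothetical helper: stopping the MediaRecorder flushes remaining data
stopRecording() {
  if (this.recorder &amp;amp;&amp;amp; this.recorder.state !== "inactive") {
    this.recorder.stop();
  }
},
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;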
&lt;P&gt;...and displayed below as a video with the storyline in a new div:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="highlight js-code-highlight"&gt;
&lt;PRE class="highlight javascript"&gt;&lt;CODE&gt;      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;document&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;createElement&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;video&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;fullBlob&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="k"&gt;new&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;Blob&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="k"&gt;this&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;chunks&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="kd"&gt;const&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;downloadUrl&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nb"&gt;window&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;URL&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;createObjectURL&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;fullBlob&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;src&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="nx"&gt;downloadUrl&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="nb"&gt;document&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;getElementById&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;story&lt;/SPAN&gt;&lt;SPAN class="dl"&gt;"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;).&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;appendChild&lt;/SPAN&gt;&lt;SPAN class="p"&gt;(&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;);&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;autoplay&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
      &lt;SPAN class="nx"&gt;video&lt;/SPAN&gt;&lt;SPAN class="p"&gt;.&lt;/SPAN&gt;&lt;SPAN class="nx"&gt;controls&lt;/SPAN&gt; &lt;SPAN class="o"&gt;=&lt;/SPAN&gt; &lt;SPAN class="kc"&gt;true&lt;/SPAN&gt;&lt;SPAN class="p"&gt;;&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="highlight__panel js-actions-panel"&gt;
&lt;DIV class="highlight__panel-action js-fullscreen-code-action"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;This app can be deployed as an Azure Static Web App using the excellent&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/vscode-azurestaticwebapps" target="_blank" rel="noopener"&gt;Azure plugin for Visual Studio Code&lt;/A&gt;. And once it's live, you can tell durable shadow stories!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jelooper_6-1613690550392.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255926i4747DEF9FF5D67A3/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jelooper_6-1613690550392.png" alt="jelooper_6-1613690550392.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;Try Ombromanie&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/ombromanie" target="_blank" rel="noopener"&gt;here&lt;/A&gt;. The codebase is available&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/ombromanie-code" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Take a look at Ombromanie in action:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/HV__puO1Dco" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" data-mce-fragment="1"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=ca-14261-jelooper" target="_blank" rel="noopener"&gt;Learn more about AI on Azure&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://www.youtube.com/watch?v=h281NX568rU&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=4" target="_blank" rel="noopener"&gt;Azure AI Essentials Video covering speech and language&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://azure.microsoft.com/en-us/free/?OCID=AID3029145&amp;amp;WT.mc_id=ca-14261-jelooper" target="_blank" rel="noopener"&gt;Azure free account sign-up&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2021 19:09:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/2166579</guid>
      <dc:creator>jelooper</dc:creator>
      <dc:date>2021-02-25T19:09:52Z</dc:date>
    </item>
    <item>
      <title>Responsible Machine Learning with Error Analysis</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/responsible-machine-learning-with-error-analysis/ba-p/2141774</link>
      <description>&lt;DIV class="lia-message-subject-wrapper lia-component-subject lia-component-message-view-widget-subject-with-options"&gt;&lt;SPAN style="color: inherit; font-family: inherit; font-size: 24px;"&gt;Overview&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV class="lia-message-body-wrapper lia-component-message-view-widget-body"&gt;
&lt;DIV id="bodyDisplay" class="lia-message-body"&gt;
&lt;DIV class="lia-message-body-content"&gt;
&lt;P&gt;&lt;STRONG&gt;Website:&lt;/STRONG&gt; &lt;A href="http://erroranalysis.ai/" target="_blank" rel="noopener"&gt;ErrorAnalysis.ai&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Github repository:&lt;/STRONG&gt; &lt;A href="https://github.com/microsoft/responsible-ai-widgets/" target="_blank" rel="noopener"&gt;https://github.com/microsoft/responsible-ai-widgets/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Machine Learning (ML) teams who deploy models in the real world often face the challenge of conducting rigorous performance evaluation and testing for ML models. How often do we read claims such as “Model X is 90% accurate on a given benchmark” and wonder what this claim means for practical usage of the model? In practice, teams are well aware that model accuracy may not be uniform across subgroups of data and that there might exist input conditions for which the model fails more often. Often, such failures may cause direct consequences related to lack of reliability and safety, unfairness, or more broadly lack of trust in machine learning altogether. For instance, when a traffic sign detector does not operate well in certain daylight conditions or for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://interestingengineering.com/tesla-autopilot-mistakes-red-letters-on-flag-for-red-traffic-lights" target="_self" rel="nofollow noopener noreferrer"&gt;unexpected inputs&lt;/A&gt;, even though the overall accuracy of the model may be high, it is still important for the development team to know ahead of time that the model may not be as reliable in such situations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="besmiranushi_0-1613538210604.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255440i28671D47179C4A7D/image-size/large?v=v2&amp;amp;px=999" role="button" title="besmiranushi_0-1613538210604.png" alt="besmiranushi_0-1613538210604.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 1&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- Error Analysis moves away from aggregate accuracy metrics, exposes the distribution of errors to developers in a transparent way, and enables them to identify &amp;amp; diagnose errors efficiently.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;While there exist several problems with current model assessment practices, one of the most obvious is the usage of aggregate metrics to score models on a whole benchmark. It is difficult to convey a detailed story on model behavior with a single number, and yet most of the research and leaderboards operate on single scores. At the same time, there may exist several dimensions of the input feature space into which a practitioner may want to take a deep dive, asking questions such as “What happens to the accuracy of the recognition model in a self-driving car when it is dark and snowing outside?” or “Does the loan approval model perform similarly for population cohorts across ethnicity, gender, age, and education?”. Navigating the terrain of failures along multiple potential dimensions like the above can be challenging. In addition, in the longer term, when models are updated and re-deployed frequently upon new data evidence or scientific progress, teams also need to continuously track and monitor model behavior so that&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/research/blog/creating-better-ai-partners-a-case-for-backward-compatibility/" target="_self" rel="noopener noreferrer"&gt;updates do not introduce new mistakes and therefore break user trust&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To address these problems, practitioners often have to create custom infrastructure, which is tedious and time-consuming. To accelerate rigorous ML development, in this blog you will learn how to use the Error Analysis tool for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Getting a deep understanding of how failure is distributed for a model.&lt;/LI&gt;
&lt;LI&gt;Debugging ML errors with active data exploration and interpretability techniques.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The Error Analysis toolkit is integrated within the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/responsible-ai-widgets" target="_self" rel="noopener noreferrer"&gt;Responsible AI Widgets&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;OSS repository, our starting point for providing a set of integrated tools to the open source community and ML practitioners. Beyond this contribution to the OSS RAI community, practitioners will also be able to leverage these assessment tools in&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_self" rel="noopener noreferrer"&gt;Azure Machine Learning&lt;/A&gt;, which already includes&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://fairlearn.github.io/" target="_self" rel="nofollow noopener noreferrer"&gt;Fairlearn&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&amp;amp;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://interpret.ml/" target="_self" rel="nofollow noopener noreferrer"&gt;InterpretML&lt;/A&gt;, with Error Analysis arriving in mid 2021.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are interested in learning more about training model updates that remain backward compatible with their previous selves by minimizing regress and new errors, you can also check out our most recent open source library and tool&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/BackwardCompatibilityML/" target="_blank" rel="noopener noreferrer"&gt;BackwardCompatibilityML&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1962279027"&gt;Prerequisites&lt;/H2&gt;
&lt;P&gt;To install the Responsible AI Widgets “raiwidgets” package, simply run the following in your Python environment to install it from &lt;A href="https://pypi.org/project/raiwidgets/" target="_blank" rel="noopener"&gt;pypi&lt;/A&gt;. If you do not already have interpret-community installed, you will also need to install it to support the generation of model explanations.&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install interpret-community
pip install raiwidgets&lt;/LI-CODE&gt;
&lt;P&gt;Alternatively, you can also clone the open source repository and build the code from scratch:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;git clone https://github.com/microsoft/responsible-ai-widgets.git&lt;/LI-CODE&gt;
&lt;P&gt;You will need to install yarn and node to build the visualization code, and then you can run:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;yarn install
yarn buildall&lt;/LI-CODE&gt;
&lt;P&gt;And install from the raiwidgets folder locally:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;cd raiwidgets
pip install –e .&lt;/LI-CODE&gt;
&lt;P&gt;For more information see the &lt;A href="https://github.com/microsoft/responsible-ai-widgets/blob/main/CONTRIBUTING.md" target="_blank" rel="noopener"&gt;contributing guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;If you intend to run repository tests, in the raiwidgets folder of the repository run:&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;pip install -r requirements.txt&lt;/LI-CODE&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="toc-hId-154824564"&gt;Getting started&lt;/H2&gt;
&lt;P&gt;This post illustrates the Error Analysis tool by using a binary classification task on income prediction (&amp;gt;50K, &amp;lt;50K). The model under inspection will be trained using the tabular&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://archive.ics.uci.edu/ml/datasets/Census+Income" target="_blank" rel="noopener nofollow noreferrer"&gt;UCI Census Income dataset&lt;/A&gt;, which contains both numerical and categorical features such as age, education, number of working hours, ethnicity, etc.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We can call the error analysis dashboard using the API below, which takes in an explanation object computed by one of the explainers from the interpret-community repository, the model or pipeline, a dataset and the corresponding labels (true_y parameter):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;ErrorAnalysisDashboard(global_explanation, model, dataset=x_test, true_y=y_test)&lt;/LI-CODE&gt;
&lt;P&gt;For larger datasets, we can downsample the explanation to fewer rows but run error analysis on the full dataset.&amp;nbsp; We can provide the downsampled explanation, the model or pipeline, the full dataset, and then both the labels for the sampled explanation and the full dataset, as well as (optionally) the names of the categorical features:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;ErrorAnalysisDashboard(global_explanation, model, dataset=X_test_original_full,true_y=y_test, categorical_features=categorical_features, true_y_dataset=y_test_full)&lt;/LI-CODE&gt;
&lt;P&gt;All screenshots below are generated using a LGBMClassifier with three estimators. You can directly run this example using the &lt;A href="https://github.com/microsoft/responsible-ai-widgets/tree/main/notebooks" target="_self"&gt;jupyter notebooks in our repository&lt;/A&gt;.&lt;/P&gt;
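&lt;P&gt;For orientation, here is a minimal end-to-end sketch of what such a setup can look like. It assumes the TabularExplainer from interpret-community, plus a preprocessed feature matrix x, labels y, and a feature_names list; data loading and encoding are omitted, and the notebooks in the repository may differ in the details.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch (assumptions noted above), not the exact repository notebook
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from interpret_community import TabularExplainer
from raiwidgets import ErrorAnalysisDashboard

# x, y, feature_names: preprocessed UCI Census Income features, labels and column names
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=7)

# A small LGBMClassifier with three estimators, as used for the screenshots
model = LGBMClassifier(n_estimators=3).fit(x_train, y_train)

# Global explanation computed by an interpret-community explainer
explainer = TabularExplainer(model, x_train, features=feature_names)
global_explanation = explainer.explain_global(x_test)

# Launch the Error Analysis dashboard
ErrorAnalysisDashboard(global_explanation, model, dataset=x_test, true_y=y_test)&lt;/LI-CODE&gt;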
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 id="toc-hId--1652629899"&gt;How Error Analysis works&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-834882934"&gt;1. Identification&lt;/H2&gt;
&lt;P&gt;Error Analysis starts with identifying the cohorts of data with a higher error rate versus the overall benchmark error rate. The dashboard allows for error exploration by using either an error heatmap or a decision tree guided by errors.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Error Heatmap for Error Identification&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The view slices the data based on a one- or two-dimensional grid of input features. Users can choose the input features of interest for analysis. The heatmap visualizes cells with higher error in a darker red color to bring the user’s attention to regions with high error discrepancy. This is especially beneficial when the error themes differ across partitions, which happens frequently in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses about which features might be most important for understanding failure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="heatmap.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255447iA751BBC3C7FE1F8D/image-size/large?v=v2&amp;amp;px=999" role="button" title="heatmap.png" alt="heatmap.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 2&lt;/STRONG&gt;&amp;nbsp;-&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;While the overall error rate for the dataset is 23.65%, the heatmap reveals that the error rates are visibly higher, up to 83%, for individuals with higher education. Error rates are also higher for males vs. females.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Decision Tree for Error Identification&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Very often, error patterns may be complex and involve more than one or two features. Therefore, it may be difficult for developers to explore all possible combinations of features to discover hidden data pockets with critical failure. To alleviate the burden, the binary tree visualization automatically partitions the benchmark data into interpretable subgroups, which have unexpectedly high or low error rates. In other words, the tree leverages the input features to maximally separate model error from success. For each node defining a data subgroup, users can investigate the following information:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Error rate&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the portion of instances in the node for which the model is incorrect. This is shown through the intensity of the red color.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error coverage&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the portion of all errors that fall into the node. This is shown through the fill rate of the node.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data representation&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;- the number of instances in the node. This is shown through the thickness of the incoming edge to the node, along with the actual total number of instances in the node.&lt;/LI&gt;
&lt;/UL&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="tree.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255448iD3C9D47644F86366/image-size/large?v=v2&amp;amp;px=999" role="button" title="tree.png" alt="tree.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 3&lt;/STRONG&gt;&amp;nbsp;– Decision tree that aims at finding failure modes by separating error instances from success instances in the data. The hierarchical error pattern here shows that while the overall error rate is 23.65% for the dataset, it can be as high as 96.77% for individuals who are married, have a capital gain higher than 4401, and a number of education years higher than 12.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Cohort definition and manipulation&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To specialize the analysis and allow for deep dives, both error identification views can be generated for any data cohort and not only for the whole benchmark. Cohorts are subgroups of data that the user may choose to save for future investigation. They can be defined and manipulated interactively either from the heatmap or the tree, and they can be carried over to the subsequent diagnostic views on data exploration and model explanations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cohort manipulation.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255449i286BAA42FA7B9F0C/image-size/large?v=v2&amp;amp;px=999" role="button" title="cohort manipulation.png" alt="cohort manipulation.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 4&lt;/STRONG&gt;&amp;nbsp;- Creating a new cohort for further investigation that focuses on individuals who are married and have capital gain lower than 4401.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--972571529"&gt;2. Diagnosis&lt;/H2&gt;
&lt;P&gt;After identifying cohorts with higher error rates, Error Analysis enables debugging and exploring these cohorts further. It is then possible to gain deeper insights about the model or the data through data exploration and model interpretability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-1514941304"&gt;Debugging the data&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Data Explorer&lt;/STRONG&gt;: Users can explore dataset statistics and distributions by selecting different features and estimators along the two axes of the data explorer. They can further compare the subgroup data stats with those of other subgroups or the overall benchmark data. This view can, for instance, uncover whether certain cohorts are underrepresented or whether their feature distribution is significantly different from the overall data, therefore hinting at the potential existence of outliers or unusual covariate shift.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="data explorer.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255450i97C20B813B8E18D8/image-size/large?v=v2&amp;amp;px=999" role="button" title="data explorer.png" alt="data explorer.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Figure 5&lt;/STRONG&gt;&amp;nbsp;-&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;In Figures 1 and 2, we discovered that the model has higher failure rates for individuals with a higher number of education years. When we look at how the data is distributed across the feature “education_num”, we can see that a) there are fewer instances for individuals with more than 12 years of education, and b) for this cohort the distribution between lower income (&lt;FONT color="#3366FF"&gt;&lt;STRONG&gt;blue&lt;/STRONG&gt;&lt;/FONT&gt;) and higher income (&lt;FONT color="#FF6600"&gt;&lt;STRONG&gt;orange&lt;/STRONG&gt;&lt;/FONT&gt;) is very different than for other cohorts. In fact, in this cohort more people have an income higher than 50K, which is not true for the overall data.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Instance views&lt;/STRONG&gt;: Beyond data statistics, sometimes it is useful to simply observe the raw data along with its labels in a tabular or tile form. Instance views provide this functionality and divide the instances into correct and incorrect tabs. By eyeballing the data, the developer can identify potential issues related to missing features or label noise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--292513159"&gt;Debugging the model&lt;/H2&gt;
&lt;P&gt;Model interpretability is a powerful means for extracting knowledge on how a model works. To extract this knowledge, Error Analysis relies on Microsoft’s&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/interpretml" target="_blank" rel="noopener noreferrer"&gt;InterpretML&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;dashboard and library. The library is a prominent contribution to ML interpretability, led by Rich Caruana, Paul Koch, Harsha Nori, and Sam Jenkins.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Global explanations&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Feature Importance&lt;/STRONG&gt;: Users can explore the top K important features that impact the overall model predictions (a.k.a. global explanation) for a selected subgroup of data or cohort. They can also compare feature importance values for different cohorts side by side. The information on feature importance or the ordering is useful for understanding whether the model is leveraging features that are necessary for the prediction or whether it is relying on spurious correlations. By contrasting explanations that are specific to the cohort with those for the whole benchmark, it is possible to understand whether the model behaves differently or in an unusual way for the selected cohort.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Dependence Plot&lt;/STRONG&gt;: Users can see the relationship between the values of the selected feature and the corresponding feature importance values. This shows them how values of the selected feature impact the model prediction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="global explanations.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255451i12F5306D2F532C39/image-size/large?v=v2&amp;amp;px=999" role="button" title="global explanations.png" alt="global explanations.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 6&lt;/STRONG&gt;&amp;nbsp;- Global feature explanations for the income prediction model show that marital status and number of education years are the most important features globally. By clicking on each feature, it is possible to observe more granular dependencies. For example, marital statuses like “divorced”, “never married”, “separated”, or “widowed” contribute to model predictions for lower income (&amp;lt;50K). Marital status of “civil spouse” instead contributes to model predictions for higher income (&amp;gt;50K).&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Local explanations&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Global explanations approximate the overall model behavior. To focus the debugging process on a given data instance, users can select any individual data point (with a correct or incorrect prediction) from the tabular instance view to explore its local feature importance values (local explanation) and individual conditional expectation (ICE) plots.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Local Feature Importance&lt;/STRONG&gt;: Users can investigate the top K (configurable) important features for an individual prediction. This helps illustrate the local behavior of the underlying model on a specific data point.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Individual Conditional Expectation&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;(ICE)&lt;/STRONG&gt;: Users can investigate how changing a feature value from a minimum value to a maximum value impacts the prediction on the selected data instance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Perturbation Exploration (what-if analysis)&lt;/STRONG&gt;: Users can apply changes to feature values of the selected data point and observe resulting changes to the prediction. They can save their hypothetical what-if data points for further comparisons with other what-if or original data points.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="local explanation what if.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255452i4FAEA0561185300D/image-size/large?v=v2&amp;amp;px=999" role="button" title="local explanation what if.png" alt="local explanation what if.png" /&gt;&lt;/span&gt;&lt;BR /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 7&lt;/STRONG&gt;&amp;nbsp;- For this individual, the model outputs a wrong prediction, predicting that the individual earns less than 50K, while the opposite is true. With what-if explanations, it is possible to understand how the model would behave if one of the feature values changes. For instance, here we can see that if the individual were 10 years older (age changed from 32 to 42) the model would have made a correct prediction. While in the real world many of these features are not mutable, this sensitivity analysis is intended to further support practitioners with model understanding capabilities.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId--2099967622"&gt;Other relevant tools&lt;/H2&gt;
&lt;P&gt;Error Analysis enables practitioners to identify and diagnose error patterns. Its integration with model interpretability techniques demonstrates the value of providing such tools together as part of the same platform. We are actively working towards integrating further considerations into the model assessment experience, such as fairness and inclusion (via&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://fairlearn.github.io/" target="_self" rel="nofollow noopener noreferrer"&gt;FairLearn&lt;/A&gt;) as well as backward compatibility during updates (via&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/BackwardCompatibilityML" target="_self" rel="noopener noreferrer"&gt;BackwardCompatibilityML&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 id="toc-hId-387545211"&gt;Our team&lt;/H2&gt;
&lt;P&gt;The initial work on error analysis started with research investigations on methodologies for in-depth understanding and explanation of Machine Learning failures.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://besmiranushi.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Besmira Nushi&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.ecekamar.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Ece Kamar&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://www.erichorvitz.com/" target="_blank" rel="noopener nofollow noreferrer"&gt;Eric Horvitz&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;at Microsoft Research are leading these efforts and continue to innovate with new techniques for debugging ML models. In the past year, our team was extended via a collaboration with the RAI tooling team in the Azure Machine Learning group as well as the Analysis Platform team in Microsoft Mixed Reality. The Analysis Platform team has invested several years of engineering work in building internal infrastructure and now we are making these efforts available to the community as open source as part of the Azure Machine Learning ecosystem. The RAI tooling team consists of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/imatiach/" target="_blank" rel="noopener nofollow noreferrer"&gt;Ilya Matiach&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://cs-people.bu.edu/sameki/" target="_blank" rel="noopener nofollow noreferrer"&gt;Mehrnoosh Sameki&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/romanlutz/" target="_blank" rel="noopener nofollow noreferrer"&gt;Roman Lutz&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/richard-edgar-48aa0613/" target="_blank" rel="noopener nofollow noreferrer"&gt;Richard Edgar&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/hyemisong/" target="_blank" rel="noopener nofollow noreferrer"&gt;Hyemi Song&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/minsoothigpen/" target="_blank" rel="noopener nofollow noreferrer"&gt;Minsoo Thigpen&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/anupshirgaonkar/" target="_blank" rel="noopener nofollow noreferrer"&gt;Anup Shirgaonkar&lt;/A&gt;. They are passionate about democratizing Responsible AI and have several years of experience in shipping such tools for the community with previous examples on FairLearn, InterpretML Dashboard etc. We also received generous help and expertise along the way from our partners at Microsoft Aether Committee and Microsoft Mixed Reality:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://linkedin.com/in/parham-mohadjer-09365b96/" target="_blank" rel="noopener nofollow noreferrer"&gt;Parham Mohadjer&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://linkedin.com/in/paulbkoch/" target="_blank" rel="noopener nofollow noreferrer"&gt;Paul Koch&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/praphat-xavier-fernandes-86574814/" target="_blank" rel="noopener nofollow noreferrer"&gt;Xavier Fernandes&lt;/A&gt;, and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/juanlema/" target="_blank" rel="noopener nofollow noreferrer"&gt;Juan Lema&lt;/A&gt;. 
All marketing initiatives, including the presentation of this blog, were coordinated by&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.linkedin.com/in/thuylnguyen/" target="_blank" rel="noopener nofollow noreferrer"&gt;Thuy Nguyen&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Big thanks to everyone who made this possible!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Related research&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure&lt;/STRONG&gt;. Besmira Nushi, Ece Kamar, Eric Horvitz; HCOMP 2018. &lt;A href="https://www.microsoft.com/en-us/research/publication/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Software Engineering for Machine Learning: A Case Study&lt;/STRONG&gt;.&amp;nbsp;Saleema Amershi, Andrew Begel, Christian Bird, Rob DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, Thomas Zimmermann; ICSE 2019. &lt;A href="https://www.microsoft.com/en-us/research/publication/software-engineering-for-machine-learning-a-case-study/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff&lt;/STRONG&gt;.&amp;nbsp;Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S Weld, Walter S Lasecki, Eric Horvitz; AAAI 2019. &lt;A href="https://www.microsoft.com/en-us/research/publication/updates-in-human-ai-teams-understanding-and-addressing-the-performance-compatibility-tradeoff/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;An Empirical Analysis of Backward Compatibility in Machine Learning Systems&lt;/STRONG&gt;. Megha Srivastava, Besmira Nushi, Ece Kamar, Shital Shah, Eric Horvitz; KDD 2020. &lt;A href="https://www.microsoft.com/en-us/research/publication/an-empirical-analysis-of-backward-compatibility-in-machine-learning-systems/" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Understanding Failures of Deep Networks via Robust Feature Extraction&lt;/STRONG&gt;. Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, Eric Horvitz. arXiv 2020. &lt;A href="https://arxiv.org/abs/2012.01750" target="_blank" rel="noopener"&gt;pdf&lt;/A&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 18 Feb 2021 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/responsible-machine-learning-with-error-analysis/ba-p/2141774</guid>
      <dc:creator>besmiranushi</dc:creator>
      <dc:date>2021-02-18T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Translator announces Document Translation (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185</link>
      <description>&lt;P&gt;We are announcing Document Translation, a new feature in the Azure Translator service that enables enterprises, translation agencies, and consumers to translate volumes of complex documents into one or more languages while preserving the structure and format of the original documents. Document Translation is an asynchronous batch feature that translates large documents and eliminates limits on input text size. It supports documents with rich content in different file formats including Text, HTML, Word, Excel, PowerPoint, Outlook Message, PDF, etc., and it reconstructs the translated documents preserving the layout and format present in the source.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DocTransImage (1).png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/255600i94D33232A7D04C97/image-size/large?v=v2&amp;amp;px=999" role="button" title="DocTransImage (1).png" alt="DocTransImage (1).png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Standard translation offerings in the market accept only plain text or HTML and limit the number of characters in a request. Users translating large documents must parse the documents to extract text, split them into smaller sections, and translate those sections separately. If sentences are split at an unnatural breakpoint, the translation can lose context, resulting in suboptimal results. Upon receipt of the translation results, the customer has to merge the translated pieces back into a translated document, which involves keeping track of which translated piece corresponds to the equivalent section in the original document.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;The problem gets complicated when customers want to translate complex documents with rich content. They convert the original files from a variety of formats to either .html or .txt, and then reconvert the translated content from html or txt back into the original document file format. This transformation may introduce various issues. The problem is compounded when the customer needs to translate a) a large quantity of documents, b) documents in a variety of file formats, c) documents while preserving the original layout and format, and d) documents into multiple target languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Document Translation is an asynchronous offering: the user makes a request specifying the location of the source and target documents and the list of target output languages. Document Translation returns a job identifier that enables the user to track the status of the translation. Asynchronously, Document Translation pulls each document from the source location, recognizes the document format, applies the right parsing technique to extract the textual content, and translates that content into the target languages. It then reconstructs the translated document, preserving the layout and format present in the source, and stores it in the specified location. Document Translation updates the status of the translation at the document level. This makes it easy for the customer to translate volumes of large documents in a variety of formats into a list of target languages, eliminating the challenges customers face today and improving their productivity.&lt;/P&gt;
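&lt;P&gt;As a rough illustration of this flow, the sketch below submits a batch translation job over REST and then polls the returned operation for status. The resource endpoint, key, and storage container SAS URLs are placeholders, and the exact request path and API version for the preview are described in the user documentation linked under References.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch only; endpoint, key and SAS URLs are placeholders
import requests

endpoint = "https://&amp;lt;your-translator-resource&amp;gt;.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "&amp;lt;your-key&amp;gt;"}

body = {
    "inputs": [{
        "source": {"sourceUrl": "&amp;lt;SAS URL of the source blob container&amp;gt;"},
        "targets": [
            {"targetUrl": "&amp;lt;SAS URL of the target blob container&amp;gt;", "language": "fr"}
        ]
    }]
}

# Submit the asynchronous batch job (path and version as of the preview; see the docs)
resp = requests.post(f"{endpoint}/translator/text/batch/v1.0-preview.1/batches",
                     headers=headers, json=body)
resp.raise_for_status()

# The job identifier comes back in the Operation-Location header; poll it for status
job_url = resp.headers["Operation-Location"]
print(requests.get(job_url, headers=headers).json()["status"])&lt;/LI-CODE&gt;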
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Document Translation enables users to customize the translation of documents by providing custom glossaries, a custom translation model id built using &lt;A href="https://portal.customtranslator.azure.ai/" target="_self"&gt;Custom Translator&lt;/A&gt;, or both as part of the request. Such customization retains specific terminology and provides domain-specific translations in the translated documents.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;“Translation of documents with rich formatting is a tricky business. We need the translation to be fluent and matching the context, while maintaining high fidelity in the visual appearance of complex documents. Document Translation is designed to address those goals, relieving client applications from having to disassemble and reassemble the documents after translation, making it easy for developers to build workflows that process full documents with a few simple steps.”, said Chris Wendt, Principal Program Manager.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more about Translator and the Document Translation feature, watch the video below:&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=ZKkoaV1dGew" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/ZKkoaV1dGew/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;References&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/overview" target="_self"&gt;User documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/cognitive-services/translator/" target="_self"&gt;Pricing&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Send your feedback to &lt;A href="mailto:translator@microsoft.com" target="_blank" rel="noopener"&gt;translator@microsoft.com&lt;/A&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Feb 2021 21:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185</guid>
      <dc:creator>Krishna_Doss</dc:creator>
      <dc:date>2021-02-17T21:30:00Z</dc:date>
    </item>
    <item>
      <title>Hello, bot! Conversational AI on Microsoft Platform</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/hello-bot-conversational-ai-on-microsoft-platform/ba-p/2139570</link>
      <description>&lt;DIV&gt;During the pandemic, we all found ourselves in isolation, relying more and more on effective electronic means of communication. The number of digital conversations increased dramatically, and we increasingly rely on bots to help us handle some of those conversations. In this blog post, I give a brief overview of conversational AI on the Microsoft platform and show you how to build a simple educational bot to help students learn.&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you prefer video content, here is a great video to get you started, from our &lt;A href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;AI Developer Resources&lt;/A&gt; page:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;BR /&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=Nh3S_sljkpI" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/Nh3S_sljkpI/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H1 id="do-we-need-bots-and-when"&gt;Do We Need Bots, and When?&lt;/H1&gt;
&lt;P&gt;Many people believe that in the future we will be interacting with computers using speech, in the same way we interact between each other. While the future is still vague, we can already benefit from conversational interfaces in many areas, for example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;In user support, which has traditionally been based on interpersonal communication, automated chat-bots can solve a lot of routine problems for users, leaving human specialists to handle only the unusual cases.&lt;/LI&gt;
&lt;LI&gt;During surgical operations, when hands-free interaction is essential. From personal experience, I also find it more convenient to set a morning alarm and “good night” music through a voice assistant before going to sleep.&lt;/LI&gt;
&lt;LI&gt;Automating some functions in interpersonal communication. My favorite example is a chat-bot that you can add to a group chat when organizing a party, and it will track how much money each of the participants spent on the preparation.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;At the current state of development of conversational AI technologies, a chat bot will not replace a human, and it will not pass the Turing test.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;&lt;FONT size="4"&gt;In practice, chat bots act as an advanced version of a command line, in which you do not need to know exact commands to perform an action.&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Thus, successful bot applications will not try to pretend to be a human, because such behavior is likely to cause some user dissatisfaction in the future. It is one of the &lt;A href="https://www.microsoft.com/ai/ai-lab-conversational-ai?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;responsible conversational AI principles&lt;/A&gt;, which you need to consider when designing a bot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="educational-bots"&gt;Educational Bots&lt;/H1&gt;
&lt;P&gt;During the pandemic, one of the areas being transformed the most is education. We can envision educational bots that help students answer the most common questions, or act as virtual teaching assistants. In this blog post, I will show you how to create a simple assistant bot that can handle several questions from the field of Geography.&lt;/P&gt;
&lt;P&gt;Before we jump to this task, let’s talk about Microsoft conversational AI stack in general, and consider different development options.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="conversational-ai-development-stack"&gt;Conversational AI Development Stack&lt;/H1&gt;
&lt;P&gt;When it comes to conversational AI, we can logically think of a conversational agent having two main parts:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Conversational interface&lt;/STRONG&gt; handles passing messages from the user to the bot and back. It takes care of communication between the user’s messaging agent (such as Microsoft Teams, Skype or Telegram) and our application logic, and includes code to handle request-response logic.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Intelligent backend&lt;/STRONG&gt; adds some AI functionality to your bot, such as recognizing the user’s phrases or finding the best possible answer.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A bot can exist without any intelligent backend, but it would not be smart. Still, bots like that are useful for automating simple tasks, such as form filling, or handling some pre-defined workflow.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IMG src="http://soshnikov.com/images/blog/bots-arch.png" border="0" alt="Bots Architecture" width="592" height="359" /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here I present a slightly simplified view of the whole &lt;A href="https://github.com/microsoft/botframework-sdk#bot-framework-ecosystem" target="_blank" rel="noopener"&gt;Bot Ecosystem&lt;/A&gt;, but this way it is easier to get the picture.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conversational Interface: Microsoft Bot Framework and Azure Bot Service&lt;/H2&gt;
&lt;P&gt;At the heart of the conversational interface is &lt;A href="https://dev.botframework.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Microsoft Bot Framework&lt;/A&gt; - an open-source development framework (with source code available &lt;A href="https://github.com/microsoft/botframework-sdk" target="_blank" rel="noopener"&gt;on GitHub&lt;/A&gt;) which contains useful abstractions for bot development. The main idea of Bot Framework is to abstract away the communication channel, and to develop bots as web endpoints that asynchronously handle request-response communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;Decoupling of bot logic and communication channel allows you to develop bot code once, and then connect it easily to different platforms, such as Skype, Teams or Telegram. Omnichannel bots are now made simple!&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The Bot Framework SDK primarily supports C#, Node.js, Python and Java, although C# or Node.js are highly recommended.&lt;/P&gt;
&lt;P&gt;To host bots developed with Bot Framework on Azure, you use &lt;A href="https://azure.microsoft.com/services/bot-services/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt;. It hosts the bot logic itself (either as a web application or an Azure Function) and allows you to declaratively define the physical channels that your bot will be connected to. You can &lt;A href="https://docs.microsoft.com/azure/bot-service/bot-service-manage-channels?view=azure-bot-service-4.0&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;connect your bot to Skype or Telegram through the Azure Portal&lt;/A&gt; with a few simple steps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Intelligent Backend: LUIS and QnA Maker&lt;/H2&gt;
&lt;P&gt;Many modern bots support some form of natural language interaction. To do it, the bot needs to understand the user’s phrase, which is typically done through &lt;STRONG&gt;intent classification&lt;/STRONG&gt;. We define a number of possible &lt;STRONG&gt;intents&lt;/STRONG&gt; or actions that the bot can support, and then map an input phrase to one of the intents.&lt;/P&gt;
&lt;P&gt;This mapping is typically done using a neural network trained on some dataset of sample phrases. To take away the complexity of training your own neural network model, Microsoft provides &lt;STRONG&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Language Understanding Intelligent Service&lt;/A&gt;&lt;/STRONG&gt;, or LUIS, which allows you to train a model either &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-how-to-start-new-app/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;through web interface&lt;/A&gt;, or &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-tutorial-node-import-utterances-csv/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;an API&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-utteranceintentmapping.png" border="0" alt="Bot Utterance-Intent Mapping" width="468" height="188" /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to intent classification, LUIS also performs &lt;STRONG&gt;named entity recognition&lt;/STRONG&gt; (or &lt;A href="https://en.wikipedia.org/wiki/Named-entity_recognition" target="_blank" rel="noopener"&gt;NER&lt;/A&gt;). It can automatically extract some entities of well-known types, such as geolocations or references to date and time, and can learn to extract some user-defined entities as well.&lt;/P&gt;
&lt;P&gt;With entities extracted and the intent correctly determined, it is much easier to program the logic of your bot. This is often done using the &lt;STRONG&gt;slot filling&lt;/STRONG&gt; technique: entities extracted from the user’s input populate slots in a dictionary, and if more values are required to perform the task, an additional dialog is initiated to ask the user for the missing information, as in the sketch below.&lt;/P&gt;
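&lt;P&gt;To make the idea concrete, here is a minimal, framework-independent sketch of slot filling in Python. The intent, slot names and the recognize() helper are hypothetical placeholders for what LUIS would return; a real bot would take them from the LUIS response and drive the prompts through Bot Framework dialogs.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal slot-filling sketch; recognize() stands in for a LUIS call
REQUIRED_SLOTS = {"book_flight": ["origin", "destination", "date"]}

def recognize(utterance):
    # Placeholder: a real bot would call LUIS and map its entities to slot names
    return "book_flight", {"destination": "Paris"}

def fill_slots(utterance):
    intent, entities = recognize(utterance)
    slots = dict(entities)                    # start with what was extracted
    for name in REQUIRED_SLOTS.get(intent, []):
        if name not in slots:                 # slot still empty, so ask the user
            slots[name] = input(f"Please provide the {name}: ")
    return intent, slots

print(fill_slots("I want to fly to Paris"))&lt;/LI-CODE&gt;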
&lt;P&gt;Another type of bot behavior that often comes up is the ability to find the best matching phrase or piece of information in some table, i.e. to do an &lt;STRONG&gt;intelligent lookup&lt;/STRONG&gt;. It is useful if you want to provide a FAQ-style bot that can answer users’ questions based on some database of answers, or if you just want to program chit-chat behavior with some common responses. To implement this functionality, you can use &lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;QnA Maker&lt;/A&gt; - a service that encapsulates &lt;A href="https://docs.microsoft.com/azure/search/search-what-is-azure-search/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; and provides a simple way to build question-answering functionality. You can index any existing FAQ document, or provide question-answer pairs through the web interface, and then hook QnA Maker up to your bot with a few lines of code.&lt;/P&gt;
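&lt;P&gt;Outside of Composer, you can also query a published QnA Maker knowledge base directly over REST. The sketch below is only an illustration: the runtime host, knowledge base id and endpoint key are placeholders that you get from the knowledge base’s publish page.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch: query a published QnA Maker knowledge base
import requests

runtime = "https://&amp;lt;your-qna-resource&amp;gt;.azurewebsites.net"
kb_id = "&amp;lt;knowledge-base-id&amp;gt;"
headers = {"Authorization": "EndpointKey &amp;lt;endpoint-key&amp;gt;"}

resp = requests.post(f"{runtime}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
                     headers=headers, json={"question": "What is a capital?"})
for answer in resp.json()["answers"]:
    print(answer["score"], answer["answer"])&lt;/LI-CODE&gt;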
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bot Development: Composer, Power Virtual Agents or Code?&lt;/H2&gt;
&lt;P&gt;As I mentioned above, bots can be developed using your favorite programming language. However, this approach requires you to write some boilerplate code, understand asynchronous calls, and therefore has a significant learning curve. There are some simpler options that are good for a start!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;It is recommended to start developing your bot using low-code approach through &lt;A href="https://docs.microsoft.com/composer/introduction?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Bot Framework Composer&lt;/A&gt; - an interactive visual tool that allows you to design your bot by drawing dialog diagrams.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Composer integrates LUIS and QnA Maker out of the box, so you do not need to train those services through the web interface first and then worry about integrating them into your bot. From the same UI, you can specify events triggered by user phrases, and the dialogs that respond to them.&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-overview-image.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-overview-image.png" border="0" alt="Bot Framework Composer Main UI" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Another similar low-code option would be to use &lt;A href="https://powervirtualagents.microsoft.com/blog/how-to-use-conversational-ai-to-enhance-engagement/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Power Virtual Agents&lt;/A&gt; (PVA), a tool from the &lt;A href="https://powerplatform.microsoft.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Power Platform&lt;/A&gt; family of tools for business automation. It is especially useful if you are already familiar with Power Platform and use any of its tools to enhance productivity. In this case, PVA will be a natural choice, and it will integrate nicely into all your data points and business processes. In short, Composer is a great low-code tool for developers, while PVA is more for business users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 id="getting-started-with-bot-development"&gt;Getting Started with Bot Development&lt;/H1&gt;
&lt;P&gt;Let me show you how we can start the development of a simple educational bot that will help K-12 students with their geography classes.&amp;nbsp;&lt;SPAN&gt;We will develop a simple bot, which you can later host on Microsoft Azure and connect to most popular communication channels, such as Teams, Slack or Telegram. If you do not have an Azure account, you can&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;get a free trial&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;(or&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/students/?WT.mc_id=ca-13976-dmitryso&amp;amp;OCID=AID3029145" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;SPAN&gt;, if you are a student).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;To begin with, we will implement three simple functions in our bot:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Being able to tell a capital city for a country (&lt;EM&gt;What is the capital of Russia?&lt;/EM&gt;).&lt;/LI&gt;
&lt;LI&gt;Giving definitions of the most useful terms, e.g. answering questions like &lt;EM&gt;What is a capital?&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;Support for simple chit-chat (&lt;EM&gt;How are you today?&lt;/EM&gt;)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These functions cover the two most important elements of our intelligent backend: &lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;QnA Maker&lt;/A&gt; (which can be used to implement the last two points) and &lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;LUIS&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Starting with Bot Composer&lt;/H2&gt;
&lt;P&gt;To begin development, you need to install Bot Framework Composer - I recommend installing it as a desktop application. Then, after starting it, click the &lt;STRONG&gt;New&lt;/STRONG&gt; button, and choose the starting template for your bot: &lt;STRONG&gt;QnA Maker and LUIS&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-create1.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-create1.png" border="0" alt="Composer Create" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Once you do that, you will see the main screen of composer, with a list of triggers on the left, and the main pane to design dialogs:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-composer-mainscreen.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-composer-mainscreen.png" border="0" alt="Composer Main Screen" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here, you can delete the unused trigger &lt;STRONG&gt;BuySurface&lt;/STRONG&gt; (which is left over from the demo), and go to &lt;STRONG&gt;Welcome message&lt;/STRONG&gt; to customize the phrase that the bot says to a new user. The logic of the Welcome Message trigger is a bit complex: look for a box called &lt;STRONG&gt;Send a response&lt;/STRONG&gt;, and change the message in the right pane.&lt;/P&gt;
&lt;P&gt;The language used to define phrases is called &lt;STRONG&gt;Language generation&lt;/STRONG&gt;, or &lt;STRONG&gt;lg&lt;/STRONG&gt;. A few useful syntax rules to know:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A phrase starts with &lt;CODE class="language-plaintext highlighter-rouge"&gt;-&lt;/CODE&gt;. If you want to choose from a number of replies, specify several phrases, and one of them will be selected randomly. For example:
&lt;LI-CODE lang="bash"&gt;- Hello, I am geography helper bot!
- Hey, welcome!
- Hi, looking forward to chat with you!&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Comments start with &lt;CODE class="language-plaintext highlighter-rouge"&gt;&amp;gt;&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Some additional definitions start with &lt;CODE class="language-plaintext highlighter-rouge"&gt;@&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;You can use &lt;CODE class="language-plaintext highlighter-rouge"&gt;${...}&lt;/CODE&gt; syntax for variable substitution (we will see an example of this later)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Connecting to Azure Services&lt;/H2&gt;
&lt;P&gt;To use intelligent backend, you need to create Azure resources for LUIS and QnA Maker and provide corresponding keys to Composer:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-how-to-azure-subscription#create-luis-resources-in-the-azure-portal/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create LUIS Authoring Resource&lt;/A&gt;, and make sure to remember the region in which it was created, and copy key from &lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt; page in Azure Portal.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/qnamaker/how-to/set-up-qnamaker-service-azure?tabs=v1&amp;amp;WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create QnA Maker Service&lt;/A&gt;, and copy corresponding key&lt;/LI&gt;
&lt;LI&gt;In Composer, go to bot settings by pressing &lt;STRONG&gt;Project Settings&lt;/STRONG&gt; button in the left menu (look for a wrench icon, or expand the menu if unsure). Under settings, fill in &lt;STRONG&gt;LUIS Authoring Key&lt;/STRONG&gt;, &lt;STRONG&gt;LUIS region&lt;/STRONG&gt; and &lt;STRONG&gt;QnA Maker Subscription key&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Starting the Bot&lt;/H2&gt;
&lt;P&gt;At this point, you can already start chatting with your bot. Click &lt;STRONG&gt;Start bot&lt;/STRONG&gt; in the upper right corner, and allow some time for the magic to happen. When starting a bot, Composer actually creates and trains the underlying LUIS model, builds the Bot Framework project, and starts a local web server with a copy of the bot, ready to serve your requests.&lt;/P&gt;
&lt;P&gt;To chat with the bot, click the &lt;STRONG&gt;Test in emulator&lt;/STRONG&gt; button (you need to have the &lt;A href="https://github.com/Microsoft/BotFramework-Emulator/blob/master/README.md" target="_blank" rel="noopener"&gt;Bot Framework Emulator&lt;/A&gt; installed for this to work). This automatically opens the chat window with all the required settings, and you can start talking to your bot right away.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Creating QnA Maker Knowledge base&lt;/H2&gt;
&lt;P&gt;Let’s start by creating the term dictionary using QnA Maker. Click on &lt;STRONG&gt;QnA&lt;/STRONG&gt; in the left menu, and then &lt;STRONG&gt;Create new KB&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-create.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-create.png" border="0" alt="QnA New KB" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Here you can either start from some existing data (provided as a URL to an html, pdf, Word or Excel document), or start creating phrases from scratch.&lt;/P&gt;
&lt;P&gt;In most cases, you would have a document to start with, but in our case we will start from scratch. After creating a knowledge base, you can click &lt;STRONG&gt;Add QnA Pair&lt;/STRONG&gt; to add all the question-answer combinations you need. Note that you can add several variants of a question by using the &lt;STRONG&gt;Add alternative phrasing&lt;/STRONG&gt; link.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-phrases.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-phrases.png" border="0" alt="QnA Phrases" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In our case, I have added a &lt;EM&gt;how are you&lt;/EM&gt; phrase (with several options), and a phrase to explain the meaning of the word &lt;EM&gt;capital&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;Having added the phrases, we can start the bot and make sure that it correctly reacts to the given phrases, or to similar versions of those phrases - QnA Maker does not require an exact match; it looks for &lt;EM&gt;similar&lt;/EM&gt; phrases to decide which answer to provide.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Adding Specific Actions with LUIS&lt;/H2&gt;
&lt;P&gt;To give our bot the ability to provide capitals of countries, we need to add some specific functionality to look up a capital, triggered by a certain phrase. We definitely do not want to type all 200+ countries and their capitals into QnA Maker!&lt;/P&gt;
&lt;P&gt;Functionality to get information about a country is openly available via the &lt;A href="https://restcountries.eu/" target="_blank" rel="noopener"&gt;REST Countries&lt;/A&gt; API. For example, if we make a GET request to &lt;CODE class="language-plaintext highlighter-rouge"&gt;&lt;A href="https://restcountries.eu/rest/v2/name/Russia" target="_blank" rel="noopener"&gt;https://restcountries.eu/rest/v2/name/Russia&lt;/A&gt;&lt;/CODE&gt;, we will get a JSON response like this:&lt;/P&gt;
&lt;DIV class="language-json highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;&lt;SPAN class="p"&gt;[&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="nl"&gt;"name"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;"Russian Federation"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt;
   &lt;SPAN class="nl"&gt;"topLevelDomain"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:[&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;".ru"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;],&lt;/SPAN&gt;
   &lt;SPAN class="nl"&gt;"capital"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;:&lt;/SPAN&gt;&lt;SPAN class="s2"&gt;"Moscow"&lt;/SPAN&gt;&lt;SPAN class="p"&gt;,&lt;/SPAN&gt; &lt;SPAN class="err"&gt;...&lt;/SPAN&gt; &lt;SPAN class="p"&gt;}&lt;/SPAN&gt; &lt;SPAN class="p"&gt;]&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
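&lt;P&gt;For illustration, the same lookup can be tried in a couple of lines of Python before wiring it into the bot (the field names match the JSON above):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Quick check of the REST Countries call the bot will use
import requests

resp = requests.get("https://restcountries.eu/rest/v2/name/Russia")
print(resp.json()[0]["capital"])  # prints "Moscow"&lt;/LI-CODE&gt;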
&lt;P&gt;To ask for a capital of a given country, a user will say something like &lt;EM&gt;What is a capital of Russia?&lt;/EM&gt;, or &lt;EM&gt;I want to know the capital of Italy&lt;/EM&gt;. This phrase intent can be recognized using LUIS, and the name of the country is also extracted.&lt;/P&gt;
&lt;P&gt;To add a LUIS trigger, from the &lt;STRONG&gt;Design&lt;/STRONG&gt; page of the Composer, select your bot dialog and press “…” next to it. You will see the &lt;STRONG&gt;Add a trigger&lt;/STRONG&gt; option in the drop-down box. Select it, and then choose &lt;STRONG&gt;Intent recognized&lt;/STRONG&gt; as the trigger type.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;EM&gt;&lt;STRONG&gt;Intent Recognized&lt;/STRONG&gt; is the most common trigger type. However, you can specify &lt;STRONG&gt;Dialog events&lt;/STRONG&gt;, that allow you to structure part of the conversation as a separate dialog, or some conversational activities, such as &lt;STRONG&gt;Handoff to human&lt;/STRONG&gt;.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Then, specify trigger phrases using the LU (language understanding) format. In our case, we will use the following:&lt;/P&gt;
&lt;DIV class="language-json highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;what&lt;/SPAN&gt; &lt;SPAN class="err"&gt;is&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Russia&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;?&lt;/SPAN&gt;
&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;I&lt;/SPAN&gt; &lt;SPAN class="err"&gt;want&lt;/SPAN&gt; &lt;SPAN class="err"&gt;to&lt;/SPAN&gt; &lt;SPAN class="err"&gt;know&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Italy&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;.&lt;/SPAN&gt;
&lt;SPAN class="err"&gt;-&lt;/SPAN&gt; &lt;SPAN class="err"&gt;Give&lt;/SPAN&gt; &lt;SPAN class="err"&gt;me&lt;/SPAN&gt; &lt;SPAN class="err"&gt;a&lt;/SPAN&gt; &lt;SPAN class="err"&gt;capital&lt;/SPAN&gt; &lt;SPAN class="err"&gt;of&lt;/SPAN&gt; &lt;SPAN class="p"&gt;{&lt;/SPAN&gt;&lt;SPAN class="err"&gt;country=Greece&lt;/SPAN&gt;&lt;SPAN class="p"&gt;}&lt;/SPAN&gt;&lt;SPAN class="err"&gt;!&lt;/SPAN&gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;Here, we specify a number of trigger phrases starting with &lt;CODE class="language-plaintext highlighter-rouge"&gt;-&lt;/CODE&gt;, and we indicate that we want to extract part of the phrase as an entity &lt;CODE class="language-plaintext highlighter-rouge"&gt;country&lt;/CODE&gt;. LUIS will automatically train a model to extract entities based on the provided utterances, so make sure to provide a number of possible phrases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT size="4" color="#0000FF"&gt;&lt;EM&gt;There are some pre-defined entity types, such as &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;datetimeV2&lt;/CODE&gt;, &lt;CODE class="language-plaintext highlighter-rouge"&gt;number&lt;/CODE&gt;&lt;EM&gt;, etc. Using pre-defined types is recommended, and entity type can be specified using &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;@ &amp;lt;entity_type&amp;gt; &amp;lt;entity_name&amp;gt;&lt;/CODE&gt;&lt;EM&gt; notation. In our case, we can use &lt;/EM&gt;&lt;CODE class="language-plaintext highlighter-rouge"&gt;geographyV2&lt;/CODE&gt;&lt;EM&gt; entity type, which extracts geographic locations, including countries.&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Once we have defined the phrase recognizer, we need to add a block that will make the actual REST call and fetch information on the given country. Use the &lt;STRONG&gt;Send HTTP Request&lt;/STRONG&gt; block, and specify the following parameters:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Method = GET&lt;/LI&gt;
&lt;LI&gt;Url = &lt;CODE class="language-plaintext highlighter-rouge"&gt;https://restcountries.eu/rest/v2/name/${@country}&lt;/CODE&gt;. Here, &lt;CODE class="language-plaintext highlighter-rouge"&gt;${@country}&lt;/CODE&gt; will be substituted with the name of the recognized country.&lt;/LI&gt;
&lt;LI&gt;Result property = dialog.result&lt;/LI&gt;
&lt;LI&gt;Response type = json&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This block makes the REST call to the API and stores the result in the &lt;CODE class="language-plaintext highlighter-rouge"&gt;dialog.result&lt;/CODE&gt; property. If we provided a valid country, the JSON result is automatically parsed; otherwise, an error status code is recorded in &lt;CODE class="language-plaintext highlighter-rouge"&gt;dialog.result.statusCode&lt;/CODE&gt; - in our case, 404.&lt;/P&gt;
&lt;P&gt;To test whether the call was successful and define different logic based on the result, we insert a &lt;STRONG&gt;Branch: If/Else&lt;/STRONG&gt; block and specify the following condition: &lt;CODE class="language-plaintext highlighter-rouge"&gt;= equals(dialog.result.statusCode,200)&lt;/CODE&gt;. The true condition corresponds to the left branch, where we insert a &lt;STRONG&gt;Send a response&lt;/STRONG&gt; block with the following text:&lt;/P&gt;
&lt;DIV class="language-plaintext highlighter-rouge"&gt;
&lt;DIV class="highlight"&gt;
&lt;PRE class="highlight"&gt;&lt;CODE&gt;- A capital of ${@country} is ${dialog.result.content[0].capital}
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;If the result code is not 200, the right branch will be executed, where we insert an error message. Our final dialog should look like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-cooldialog.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-cooldialog.png" border="0" alt="" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Adding Preconfigured Chit-Chat Functionality&lt;/H2&gt;
&lt;P&gt;It would be nice if your bot could respond to more everyday phrases, such as &lt;EM&gt;How old are you?&lt;/EM&gt; or &lt;EM&gt;Do you enjoy being a bot?&lt;/EM&gt; We could define all those phrases in QnA Maker, but that would take quite some time. Luckily, there is &lt;A href="https://github.com/microsoft/BotBuilder-PersonalityChat?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Project Personality Chat&lt;/A&gt;, which contains a number of pre-defined QnA Maker knowledge bases for several languages and for a number of personalities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Professional&lt;/LI&gt;
&lt;LI&gt;Friendly&lt;/LI&gt;
&lt;LI&gt;Witty&lt;/LI&gt;
&lt;LI&gt;Caring&lt;/LI&gt;
&lt;LI&gt;Enthusiastic&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;You can grab a link to the knowledge base &lt;A href="https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets" target="_blank" rel="noopener"&gt;from here&lt;/A&gt;, then go to &lt;A href="http://qnamaker.ai" target="_blank" rel="noopener"&gt;QnA Maker Portal&lt;/A&gt;, find your knowledge base, and add this URL link to your service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-qna-chitchat.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-qna-chitchat.png" border="0" alt="Adding URL to QnAMaker" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Having done that, click &lt;STRONG&gt;Save and Train&lt;/STRONG&gt;, and enjoy talking to your bot! You can even try asking it about &lt;A href="https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#The_Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_is_42" target="_blank" rel="noopener"&gt;life, the universe and everything&lt;/A&gt;!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Testing the Bot and Publishing to Azure&lt;/H2&gt;
&lt;P&gt;Now that our basic bot functionality is complete, we can test the bot in the bot emulator:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-emulator-sample.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-emulator-sample.png" border="0" alt="Chat in bot emulator" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the bot is running locally, we can deploy it to Azure right from the Composer. If you go to &lt;STRONG&gt;Publish&lt;/STRONG&gt; in the left menu, you will be able to define a &lt;STRONG&gt;Publishing profile&lt;/STRONG&gt; for your bot. Select &lt;STRONG&gt;Define new publishing profile&lt;/STRONG&gt;, and choose one of the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-publishprofile.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-publishprofile.png" border="0" alt="Publishing profiles" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;The most common way to deploy is to use &lt;STRONG&gt;Azure Web App&lt;/STRONG&gt;. Composer only requires you to provide an Azure subscription and a resource group name, and it takes care of creating all the required resources (including bot-specific LUIS/QnA Maker instances) automatically. It may take a while, but it will save you a lot of time and the hassle of manual deployment.&lt;/P&gt;
&lt;P&gt;Once the bot is published to Azure, you can go to the Azure portal and configure the &lt;STRONG&gt;Channels&lt;/STRONG&gt; through which your bot will be available to the external world, such as Telegram, Microsoft Teams, Slack or Skype.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="http://soshnikov.com/images/blog/bot-azure-channels.png" target="_blank" rel="noopener"&gt;&lt;IMG src="http://soshnikov.com/images/blog/bot-azure-channels.png" border="0" alt="Add Bot Channels" /&gt;&lt;/A&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1 id="conclusion"&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;Creating a bot using Bot Composer is easy. In fact, you can create quite powerful bots almost without any code! And you can also hook them up to your enterprise endpoints using features such as HTTP REST APIs and OAuth authorization.&lt;/P&gt;
&lt;P&gt;However, there are cases when you need to significantly extend bot functionality using code. In this case, you have several options:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Keep main bot authoring in Bot Composer, and develop &lt;A href="https://docs.microsoft.com/composer/how-to-add-custom-action/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Custom Actions&lt;/A&gt; in C#&lt;/LI&gt;
&lt;LI&gt;Export the complete bot code in C# or JavaScript using the &lt;STRONG&gt;Custom Runtime&lt;/STRONG&gt; feature of Composer, and then fully customize it. This approach is not ideal, because you lose the ability to maintain the source of your bot in Composer.&lt;/LI&gt;
&lt;LI&gt;Write a bot from the beginning in one of the supported languages (C#, JS, Python or Java) using &lt;A href="https://dev.botframework.com/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Bot Framework&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;&lt;FONT size="4"&gt;If you want to explore how the same Educational bot for Geography can be written in C#, check out this Microsoft Learn Module: &lt;A href="https://docs.microsoft.com/learn/modules/responsible-bots/?WT.mc_id=ca-13976-dmitryso" target="_blank" rel="noopener"&gt;Create a chat bot to help students learn with Azure Bot Service&lt;/A&gt;.&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I am sure the conversational approach to UI can prove useful in many cases, and the Microsoft Conversational Platform offers a wide variety of tools to support all your scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Feb 2021 18:23:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/hello-bot-conversational-ai-on-microsoft-platform/ba-p/2139570</guid>
      <dc:creator>shwars</dc:creator>
      <dc:date>2021-02-18T18:23:54Z</dc:date>
    </item>
    <item>
      <title>Computer Vision Read (OCR) API previews 73 human languages and new features on cloud and on-premise</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-73-human-languages-and-new/ba-p/2121341</link>
      <description>&lt;H1&gt;Overview&lt;/H1&gt;
&lt;P&gt;Businesses today are applying Optical Character Recognition (OCR) and document AI technologies to rapidly convert their large troves of documents and images into actionable insights. These insights power robotic process automation (RPA), knowledge mining, and industry-specific solutions. However, there are several challenges to successfully implementing these scenarios at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;The challenge&lt;/H1&gt;
&lt;P&gt;Your customers are global, and their content is global, so your systems should also speak and read international languages. Nothing is more frustrating than failing to reach your global customers due to a lack of support for their native languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Secondly, your documents are large, with potentially hundreds or even thousands of pages. To complicate things, they have printed and handwritten text mixed in the same documents. To make matters worse, they contain multiple languages in the same document, possibly even in the same line.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thirdly, you are a business that’s trusted by your customers to protect their data and information. If your customers are in industries such as healthcare, insurance, banking, and finance, you have stringent data privacy and security needs. You need the flexibility to deploy your solutions on the world’s most trusted cloud or on-premise within your environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, you should not have to choose between world-class AI quality, world languages support, and deployment on cloud or on-premise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Computer Vision OCR (Read API)&lt;/H1&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt;Microsoft’s Computer Vision OCR (Read)&lt;/A&gt; technology is available as a Cognitive Services Cloud API and as Docker containers. Customers use it in diverse scenarios on the cloud and within their networks to help automate image and document processing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/TX7XwwIG5lw" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/TX7XwwIG5lw/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;What’s New&lt;/H1&gt;
&lt;P&gt;We are announcing Computer Vision's &lt;A href="https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005" target="_blank" rel="noopener"&gt;Read API v3.2 public preview&lt;/A&gt; as a cloud service and Docker container. It includes the following updates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support#optical-character-recognition-ocr" target="_blank" rel="noopener" data-linktype="relative-path"&gt;OCR for 73 languages&lt;/A&gt;&amp;nbsp;including Simplified and Traditional Chinese, Japanese, Korean, and several Latin languages.&lt;/LI&gt;
&lt;LI&gt;Natural reading order for the text line output.&lt;/LI&gt;
&lt;LI&gt;Handwriting style classification for text lines.&lt;/LI&gt;
&lt;LI&gt;Text extraction for selected pages for a multi-page document.&lt;/LI&gt;
&lt;LI&gt;Available as a&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers?tabs=version-3-2" target="_blank" rel="noopener" data-linktype="relative-path"&gt;Distroless container&lt;/A&gt;&amp;nbsp;for on-premise deployment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;First wave of language expansion&lt;/H1&gt;
&lt;P&gt;With the latest Read preview version, we are announcing &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support" target="_blank" rel="noopener"&gt;OCR support for 73 languages&lt;/A&gt;, including Chinese Simplified, Chinese Traditional, Japanese, Korean, and several Latin languages, a 10x increase from the Read 3.1 GA version.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks to Read’s universal model, you can extract text in all these languages by calling the Read API without the optional language parameter. We recommend not using the language parameter if you are unsure of the language of the input document or image at run time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The latest Read preview supports the following languages:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Read3.2-Preview-Languages.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254311iAC5BDA5C8AD7426E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Read3.2-Preview-Languages.png" alt="Read 3.2 Preview Languages" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Read 3.2 Preview Languages&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, once you have &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?tabs=visual-studio&amp;amp;pivots=programming-language-rest-api#prerequisites" target="_blank" rel="noopener"&gt;created a Computer Vision resource&lt;/A&gt;, the following curl code will call the Read 3.2 preview with the sample image.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make the following changes in the command where needed:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Replace the value of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;&amp;lt;subscriptionKey&amp;gt;&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with your subscription key.&lt;/LI&gt;
&lt;LI&gt;Replace the first part of the request URL (&lt;CODE&gt;westcentralus&lt;/CODE&gt;) with the text in your own endpoint URL.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation. The URL expires in 48 hours.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 class="lia-align-left"&gt;Natural reading order output (Latin languages)&lt;/H1&gt;
&lt;P class="lia-align-left"&gt;OCR services typically output text in a certain order in their output. With the new Read preview, choose to get the text lines in the natural reading order instead of the default left to right and top to bottom ordering. Use the new&amp;nbsp;&lt;EM&gt;readingOrder&lt;/EM&gt;&amp;nbsp;query parameter with the “&lt;EM&gt;natural&lt;/EM&gt;”&amp;nbsp;value for a more human-friendly reading order output as shown in the following example.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following visualization of the JSON-formatted service response shows the text line order for the same document. Note that the first column's text lines are output in order before the second column and finally the third column.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ocr-read-order-example.png" style="width: 852px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254168i6B2E989758BBAB7A/image-size/large?v=v2&amp;amp;px=999" role="button" title="ocr-read-order-example.png" alt="OCR Read order example" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR Read order example&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;For example, the following curl code sample calls the Read 3.2 preview to analyze the &lt;A href="https://docs.microsoft.com/en-us/microsoft-365-app-certification/media/dec01.png" target="_blank" rel="noopener"&gt;sample newsletter image&lt;/A&gt; and output a natural reading order for the extracted text lines.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze?readingOrder=natural -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://docs.microsoft.com/en-us/microsoft-365-app-certification/media/dec01.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-left"&gt;&lt;SPAN style="color: inherit; font-family: inherit; font-size: 30px;"&gt;Handwriting style classification (Latin languages)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-left"&gt;When you apply OCR on business forms and applications, it’s useful to know which parts of the form has handwritten text in them so that they can be handled differently. For example, comments and the signature areas of agreements typically contain handwritten text. With the latest Read preview, the service will classify Latin languages-only text lines as handwritten style or not along with a confidence score.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;For example, in the following image, you see the appearance object in the JSON response with the style classified as handwriting along with a confidence score.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="sanjeev_jagtap_1-1613027644240.png" style="width: 726px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254124i9123CD1B35534690/image-size/large?v=v2&amp;amp;px=999" role="button" title="sanjeev_jagtap_1-1613027644240.png" alt="OCR handwriting style classification for text lines" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR handwriting style classification for text lines&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;The following code analyzes the &lt;A href="https://intelligentkioskstore.blob.core.windows.net/visionapi/suggestedphotos/2.png" target="_blank" rel="noopener"&gt;sample handwritten image&lt;/A&gt; with the Read 3.2 preview.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://intelligentkioskstore.blob.core.windows.net/visionapi/suggestedphotos/2.png\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1 class="lia-align-left"&gt;Extract text from select pages of a document&lt;/H1&gt;
&lt;P class="lia-align-left"&gt;Many standard business forms have fillable sections followed by long informational sections that are identical between documents, and versions of those documents. At other times, you will be interested in applying OCR to specific pages of interest for business-specific reasons.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;The following curl code sample calls the Read 3.2 preview to analyze the &lt;A href="https://www.annualreports.com/HostedData/AnnualReports/PDF/NASDAQ_MSFT_2019.pdf" target="_blank" rel="noopener"&gt;financial report PDF document&lt;/A&gt; with the &lt;EM&gt;pages&lt;/EM&gt; input parameter set to the page range, "3-5".&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyze?pages=3-5 -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: &amp;lt;subscription key&amp;gt;" --data-ascii "{\"url\":\"https://www.annualreports.com/HostedData/AnnualReports/PDF/NASDAQ_MSFT_2019.pdf\"}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The response will include an&amp;nbsp;&lt;/SPAN&gt;&lt;CODE&gt;Operation-Location&lt;/CODE&gt;&lt;SPAN&gt;&amp;nbsp;header, whose value is a unique URL. You use this URL to query the results of the Read operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2-preview.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following JSON extract shows the structure of the resulting OCR output for pages 3, 4, and 5. You should see similar output for your own documents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;"readResults": [
      {
        "page": 3,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      },
      {
        "page": 4,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      },
      {
        "page": 5,
        "angle": 0,
        "width": 8.5,
        "height": 11,
        "unit": "inch",
        "lines": []
      }
]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;On-premise option with Distroless container&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sanjeev_jagtap_3-1613027644248.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254126i55D26341C58C22DE/image-size/small?v=v2&amp;amp;px=200" role="button" title="sanjeev_jagtap_3-1613027644248.png" alt="sanjeev_jagtap_3-1613027644248.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Read 3.2 preview OCR container provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All features from the Read cloud API preview&lt;/LI&gt;
&lt;LI&gt;Distroless container release&lt;/LI&gt;
&lt;LI&gt;Performance and memory enhancements&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;Install and run the Read containers&lt;/A&gt; to get started and find the recommended configuration settings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Get Started&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;Create a Computer Vision resource&lt;/A&gt; in Azure.&lt;/LI&gt;
&lt;LI&gt;Follow our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?tabs=visual-studio&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;SDK and REST API QuickStarts&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt; OCR (Read)&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;See the list of &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support" target="_blank" rel="noopener"&gt;OCR supported languages&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;Read containers&lt;/A&gt; and download them from Docker Hub.&lt;/LI&gt;
&lt;LI&gt;Write to us at &lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 15 Mar 2021 00:18:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-73-human-languages-and-new/ba-p/2121341</guid>
      <dc:creator>sanjeev_jagtap</dc:creator>
      <dc:date>2021-03-15T00:18:16Z</dc:date>
    </item>
    <item>
      <title>Integrating AI: Best Practices and Resources to Get Started</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/ba-p/2115408</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="none"&gt;We use&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Microsoft AI Documentation" href="https://docs.microsoft.com/ai/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;AI (&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;A&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rtificial&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;I&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ntelligence)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;integrated applications daily&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;from search engines&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;optimized&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;find the most relevant content&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;recommendation engines for streaming or shopping.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;During&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;AI’s early years rising to popularity, improving applications with AI&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;was only possible for companies with big budgets dedicated to research and experts&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;preventing&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;companies&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;that&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;cannot&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;effort an AI team to compete.&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;T&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;oday&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;AI is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;readily available for any product&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;without having to invest in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;research&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and development.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;There are open-source libraries that can help you train Machine Learning models like&amp;nbsp;&lt;/SPAN&gt;&lt;A title="TensorFlow and Azure Machine Learning" href="https://docs.microsoft.com/azure/machine-learning/how-to-train-tensorflow?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;TensorFlow&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;. 
With a fraction of the effort and the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;cost, &lt;A title="Azure Cognitive Services" href="https://azure.microsoft.com/services/cognitive-services/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;pre-trained AI services&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are available to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;easily integrate into your applications&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;with &lt;A title="Custom Vision Rest APIs" href="https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&amp;amp;pivots=programming-language-csharp&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;APIs&lt;/A&gt; and &lt;A title="Custom Vision Web Tool" href="https://www.customvision.ai/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;UI based tools to train custom models&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;for your specific use case.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;In Integrating AI&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;series,&lt;/SPAN&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;I&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;aim to&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;help you decide if and how to integrate AI into your applications,&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;get you started with&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Azure’s ready to use AI solutions, Cognitive&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;Services&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;and answer your most&amp;nbsp;&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;frequent questions&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;when getting started.&lt;/SPAN&gt;&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;L&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;et’s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;start with these fundamental questions&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;What are the problems you can solve with AI?&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;What do you need to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;know&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;before&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;start&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ing to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;build your solution?&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="3" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;How&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;do&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;you measure&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the success of your new AI features&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)" data-ccp-parastyle-defn="{&amp;quot;ObjectId&amp;quot;:&amp;quot;4893c645-83d4-4448-a154-c57500b90212|46&amp;quot;,&amp;quot;Properties&amp;quot;:[67122396,&amp;quot;&amp;quot;,134224900,&amp;quot;false&amp;quot;,134224901,&amp;quot;false&amp;quot;,134224902,&amp;quot;false&amp;quot;,134233614,&amp;quot;true&amp;quot;,201334293,&amp;quot;17&amp;quot;,201340122,&amp;quot;2&amp;quot;,201341983,&amp;quot;0&amp;quot;,201342447,&amp;quot;5&amp;quot;,201342448,&amp;quot;1&amp;quot;,201342449,&amp;quot;1&amp;quot;,268442635,&amp;quot;30&amp;quot;,335551500,&amp;quot;12874052&amp;quot;,335551550,&amp;quot;1&amp;quot;,335551620,&amp;quot;1&amp;quot;,335559738,&amp;quot;240&amp;quot;,335559739,&amp;quot;80&amp;quot;,335559740,&amp;quot;259&amp;quot;,335560102,&amp;quot;2&amp;quot;,469769226,&amp;quot;Avenir Next LT Pro,Arial,Calibri&amp;quot;,469775450,&amp;quot;Heading3 (Designer)&amp;quot;,469775498,&amp;quot;BodyText (Designer)&amp;quot;,469777841,&amp;quot;Avenir Next LT Pro&amp;quot;,469777842,&amp;quot;Arial&amp;quot;,469777843,&amp;quot;Calibri&amp;quot;,469777844,&amp;quot;Calibri&amp;quot;,469777929,&amp;quot;Heading3 (Designer) Char&amp;quot;,469778129,&amp;quot;Heading3(Designer)&amp;quot;,469778324,&amp;quot;Normal&amp;quot;],&amp;quot;ClassId&amp;quot;:1073872969}" data-ccp-parastyle-linked-defn="{&amp;quot;ObjectId&amp;quot;:&amp;quot;4893c645-83d4-4448-a154-c57500b90212|60&amp;quot;,&amp;quot;Properties&amp;quot;:[134224900,&amp;quot;false&amp;quot;,134224901,&amp;quot;false&amp;quot;,134224902,&amp;quot;false&amp;quot;,134231262,&amp;quot;true&amp;quot;,134233614,&amp;quot;true&amp;quot;,201334293,&amp;quot;17&amp;quot;,201340122,&amp;quot;1&amp;quot;,201342447,&amp;quot;5&amp;quot;,201342448,&amp;quot;1&amp;quot;,201342449,&amp;quot;1&amp;quot;,268442635,&amp;quot;30&amp;quot;,335551500,&amp;quot;12874052&amp;quot;,469769226,&amp;quot;Avenir Next LT Pro,Arial,Calibri&amp;quot;,469775450,&amp;quot;Heading3 (Designer) Char&amp;quot;,469777841,&amp;quot;Avenir Next LT Pro&amp;quot;,469777842,&amp;quot;Arial&amp;quot;,469777843,&amp;quot;Calibri&amp;quot;,469777844,&amp;quot;Calibri&amp;quot;,469777929,&amp;quot;Heading3 (Designer)&amp;quot;,469778129,&amp;quot;Heading3(Designer)Char&amp;quot;,469778324,&amp;quot;Default Paragraph Font&amp;quot;],&amp;quot;ClassId&amp;quot;:1073872969}"&gt;W&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;hat&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;are&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 
(Designer)"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;problems&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW96081487 BCX0" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW96081487 BCX0" data-ccp-parastyle="Heading3 (Designer)"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;you can solve with AI?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW96081487 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:80,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/qJGRd34Hnl0" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/qJGRd34Hnl0/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;AI is a&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;groundbreaking&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;technology but not a magical solution for every&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;thing&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. It is important to know if you are adding value or&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;solving a&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;n actual&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;user problem.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;There are&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;complex products like Wikipedia and Reddit that&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;have&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;a lot of information but&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;use&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;crowdsourcing and simple search to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;cater to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;unique needs&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;without the help of AI.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;To&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;make a&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;n informed decision&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, you need to start&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;with&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;your users&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;’&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;needs.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;What are the problems they face&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Is there a process that you can automi&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ze&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;like filling&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;expense forms&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;that can be automated with Form Recognizer service?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Send voice&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;messages to your customers with updates&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;using Speech Services&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Do they make complex choices while using your product tha&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;t&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;could be customized to your users with the use of Personalizer&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Do you need to improve&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the usability&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;of your application with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;voice interactions and Language Understanding&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;It is important to solve a real need for your users instead of assuming the solution 
th&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;at will be useful.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;User research is the best way to figure out the issues and a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;lot&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;can be surfaced by&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;user analytics.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;You can use Metrics Advisor AI service to detect&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;anomalies and&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;figure out future AI solutions&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;as well&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Once you have a clear definition of the problem&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and define how to measure success, it is time to explore&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;practical&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;solutions&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;You can read about the &lt;A title="Azure customer stories" href="https://azure.microsoft.com/case-studies/?term=Cognitive+services&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure customer stories&lt;/A&gt; and learn from their methods and design process. For example, read about &lt;A title="BBC's customer story" href="https://customers.microsoft.com/story/754836-bbc-media-entertainment-azure?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;BBC's customer story&lt;/A&gt; before you read about the &lt;A title="BBC Technical Story" href="https://customers.microsoft.com/story/822271-bbc-deploys-beeb-a-custom-voice-assistant-on-azure?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;technical story&lt;/A&gt;&amp;nbsp; of using &lt;A title="Azure Speech Services Documentation" href="https://docs.microsoft.com/azure/cognitive-services/speech-service/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure's Speech&lt;/A&gt;, &lt;A title="Azure Bot Service" href="https://docs.microsoft.com/azure/bot-service/?view=azure-bot-service-4.0&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt; and &lt;A title="Language Understanding Services" href="https://docs.microsoft.com/azure/cognitive-services/luis/what-is-luis?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Language Understanding Services&lt;/A&gt; together to solve the customer needs they identified.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/NwVylAQGQhA" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/NwVylAQGQhA/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Most AI solutions can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;fall into two categories.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The first&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;major use case for AI is automati&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ng the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;mindless&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;repetitive&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;jobs&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. If th&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;e&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;users&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;of an expense report or a hiring application need to type in i&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;nformation from a form or a receipt to your system, it is easily&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;automated&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;by&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;OCR (Optical Character Recognition)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;. Similar automations are&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;possible for close captioning, translation,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;classifying images and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;automizing alert messages.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The second category of AI solutions can be categorized as complex human decisions based on data.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;You could give your friends recommendations&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;on&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;what&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to watch&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;next easily&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, knowing what they like, what they&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;don’t&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;like. For&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;example,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;a streaming serv&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ice with thousands of movies to choose from,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;cannot&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;surface&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;relevant&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;content with simple filtering of the genres&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;or release dates&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. I&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;t would&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;take forever to choose what to watch&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;by browsing&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;unless you know the exact name of the movie. For a decision like recommendation amo&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ng thousands or millions of results, AI&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;might be better at recommending to your best friend,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;maybe even&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;better than you over time.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Understanding&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the language&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and intent of people is another example. A human can understand and classify a review as positive or negative easily. For machines to d&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;etect&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the same&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;emotions&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, you&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;must&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;do more than&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;detect&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;certain words to get&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;sentiment&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What do you need to know before starting to build your solution?&lt;/H2&gt;
&lt;P&gt;Some problems are easier to solve with AI than others, so it is important to experiment with different tools to validate your solution. All the Cognitive Services are easy to try out, and here is how:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The &lt;A href="https://aidemos.microsoft.com/?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;AI Demos website&lt;/A&gt; gives you a hands-on experience of Cognitive Services.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CogSerGif.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252682i9F4D0810243AC154/image-size/large?v=v2&amp;amp;px=999" role="button" title="CogSerGif.gif" alt="CogSerGif.gif" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You can also download the &lt;A title="Intelligent Kiosk app" href="https://www.microsoft.com/p/intelligent-kiosk/9nblggh5qd84?activetab=pivot:overviewtab&amp;amp;WT.mc_id=aiml-0000-ayyonet" target="_blank" rel="noopener"&gt;Intelligent Kiosk app&lt;/A&gt; to try out the demos on your local machine.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kiosk.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252684i046DA206F60F3EB8/image-size/large?v=v2&amp;amp;px=999" role="button" title="kiosk.png" alt="kiosk.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once you create an Azure resource, you can see the code samples and API call examples, and try out the REST API endpoints directly on the &lt;A title="Cognitive Services API Reference pages" href="https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-Preview-1/operations/Sentiment?WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;Cognitive Services API Reference pages&lt;/A&gt; (a minimal example of such a call is sketched just after this list).&lt;/LI&gt;
&lt;LI&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="API.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254086iD180186E4CE9F208/image-size/large?v=v2&amp;amp;px=999" role="button" title="API.png" alt="API descriptions" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;API descriptions&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="codeSamples.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254087i8531724DC81A4630/image-size/large?v=v2&amp;amp;px=999" role="button" title="codeSamples.png" alt="Code Samples" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Code Samples&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="req.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254088iF7CABD739294341F/image-size/large?v=v2&amp;amp;px=999" role="button" title="req.png" alt="Request" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Request&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
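&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you prefer to start from code rather than the reference page, the call itself is small. Below is a minimal Python sketch against the Text Analytics sentiment endpoint; the resource endpoint, key, and API version are placeholders you would replace with your own, and the exact version your resource offers may differ.&lt;/P&gt;
&lt;PRE&gt;
import requests

# Placeholders - substitute your own Text Analytics resource values.
ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
KEY = "YOUR-TEXT-ANALYTICS-KEY"

def get_sentiment(text):
    """Send one document to the sentiment endpoint and return its result."""
    url = ENDPOINT + "/text/analytics/v3.0/sentiment"
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
    response = requests.post(url, headers=headers, json=body)
    response.raise_for_status()
    return response.json()["documents"][0]

result = get_sentiment("The plot was thin, but I could not stop watching.")
print(result["sentiment"], result["confidenceScores"])
&lt;/PRE&gt;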
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Will your users love your solution?&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Scaling an application and polishing the user experience take most of the development time. It is better to try out features fast and adjust before investing in perfecting the wrong experience. You might assume an application flow that users will follow, but users can surprise you with their own creative ways of using your tools. Prototype your applications quickly and get user feedback early on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Power Platform is one of the tools that lets you create mobile apps that integrate important AI capabilities without writing any code. With Power Platform, you can easily deploy and share your prototypes without leaving the platform’s UI. After the free trial period, both training and using your AI models will cost money, but far less than the development time of building a full app with AI and having to make major changes after release. Check out some of the capabilities and use cases of &lt;A title="AI Builder" href="https://docs.microsoft.com/ai-builder/model-types?WT.mc_id=aiml-10397-ayyonet#model-types" target="_blank" rel="noopener"&gt;AI Builder on Power Platform&lt;/A&gt;, and learn how to train a &lt;A title="No Code AI" href="https://techcommunity.microsoft.com/t5/apps-on-azure/how-to-create-a-no-code-ai-app-with-azure-cognitive-services-and/ba-p/1847264?WT.mc_id=aiml-0000-ayyonet" target="_blank" rel="noopener"&gt;custom vision model and create a mobile app on Power Platform in this article&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/VXD5ma2ZExw" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/VXD5ma2ZExw/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are other fast and easy options for adding AI to your applications without a big development investment, especially if you are adding the capabilities to an existing application. For example, you can use a Logic App on Azure to find Twitter mentions of your brand and analyze the sentiment of the tweets, then visualize the data in Power BI or your visualization platform of choice.&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:312}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Once you integrate your AI solution, you can release the new AI features to a limited group of users and compare their effectiveness with your non-AI features.&lt;/P&gt;
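&lt;P&gt;One lightweight way to do that, as a sketch only: deterministically bucket users by a hash of their ID so the same user always sees the same experience. The 10 percent share and the hashing rule below are illustrative assumptions, not a prescribed rollout mechanism.&lt;/P&gt;
&lt;PRE&gt;
import hashlib

def sees_ai_feature(user_id, percent=10):
    """Deterministically place about `percent` of users in the AI-feature group."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket &lt; percent

# The same user always lands in the same group, so you can log the group
# alongside engagement metrics and compare AI vs. non-AI behaviour later.
print(sees_ai_feature("user-1234"))
&lt;/PRE&gt;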
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Start your learning journey with the &lt;A title="AI Developer resources" href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;AI Developer resources&lt;/A&gt; and sign up for a &lt;A title="Free Azure Account" href="https://azure.microsoft.com/en-us/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-10397-ayyonet" target="_blank" rel="noopener"&gt;free Azure account&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know in the comments below which problems you are trying to solve and what your specific use cases are.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Feb 2021 00:33:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/integrating-ai-best-practices-and-resources-to-get-started/ba-p/2115408</guid>
      <dc:creator>Yonet</dc:creator>
      <dc:date>2021-02-11T00:33:22Z</dc:date>
    </item>
    <item>
      <title>Accelerate search index development with Visual Studio Code</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-search-index-development-with-visual-studio-code/ba-p/2120941</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/search/search-what-is-azure-search" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; provides developers with APIs and tools to make it easy to add a great search experience to your application. There are tools available in the&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/search/search-import-data-portal" target="_blank" rel="noopener"&gt;portal&lt;/A&gt; to import data into a search index and &lt;A href="https://docs.microsoft.com/azure/search/search-get-started-dotnet" target="_self"&gt;SDKs&lt;/A&gt; to simplify the integration of search functionality into your code base.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, sometimes you need something in between: simpler than code, but more powerful than the portal. In these cases, it’s common to interact directly with the REST APIs to quickly update an indexer, add a document, or perform other standard tasks. Tools like &lt;A href="https://www.postman.com/" target="_blank" rel="noopener"&gt;Postman&lt;/A&gt; are great for this but building out API calls from scratch can quickly become tedious. You wouldn’t write an API call from scratch to add a document to Azure Storage—you’d use &lt;A href="https://azure.microsoft.com/features/storage-explorer/" target="_self"&gt;Azure Storage Explorer&lt;/A&gt;—and we don’t want you to have to do that for search either.&lt;/P&gt;
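&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For a sense of what those raw calls look like, here is a minimal sketch of adding a document to an index with Python and the REST API. The service name, index name, admin key, and field names are placeholders, and the api-version should match what your service supports.&lt;/P&gt;
&lt;PRE&gt;
import requests

# Placeholders - substitute your own search service, index, and admin key.
SERVICE = "YOUR-SEARCH-SERVICE"
INDEX = "hotels"
API_KEY = "YOUR-ADMIN-API-KEY"

url = ("https://" + SERVICE + ".search.windows.net/indexes/" + INDEX
       + "/docs/index?api-version=2020-06-30")
headers = {"api-key": API_KEY, "Content-Type": "application/json"}
body = {
    "value": [
        {"@search.action": "mergeOrUpload", "hotelId": "42", "name": "Sample Hotel"}
    ]
}
response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # per-document status for the batch
&lt;/PRE&gt;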
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this in mind, we created the &lt;A href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch" target="_blank" rel="noopener"&gt;Visual Studio Code Extension for Azure Cognitive Search (Preview)&lt;/A&gt;. The&amp;nbsp;Visual Studio Code extension makes it easy to work with your search service using the full capabilities of the REST APIs while providing rich IntelliSense and snippets to help you. With the extension, you can create and update indexes and other components, add documents, search, and more. You’ll never need to struggle with remembering the correct syntax again.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Extension Functionality&lt;/H2&gt;
&lt;P&gt;The extension covers all the major REST API operations for Cognitive Search. Check out the examples below to see some of what’s possible and feel free to request additional functionality &lt;A href="https://github.com/microsoft/vscode-azurecognitivesearch/issues" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Browse all your Azure Cognitive Search services&lt;/H3&gt;
&lt;P&gt;Get access to all your search services in one place. You can quickly see all your indexes, indexers, and other components.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="overview.png" style="width: 595px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254023iFA5C51B05681CE1C/image-dimensions/595x419?v=v2" width="595" height="419" role="button" title="overview.png" alt="overview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create new indexes, indexers, data sources, skillsets, and synonym maps&lt;/H3&gt;
&lt;P&gt;You can create a new index or other component just by editing the JSON and saving the file. You can then read, update, or delete these components at any time.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-index.gif" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254020i58B58097B481768E/image-size/large?v=v2&amp;amp;px=999" role="button" title="create-index.gif" alt="create-index.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
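&lt;P&gt;For reference, the JSON you edit is an ordinary index definition. A minimal, hypothetical example is sketched below as a Python dictionary; the field names and attributes are placeholders, and on save the extension issues the equivalent create-or-update request for you.&lt;/P&gt;
&lt;PRE&gt;
# A minimal, hypothetical index definition - the same JSON you would edit in
# the extension. Saving the file sends it to the service on your behalf.
index_definition = {
    "name": "hotels",
    "fields": [
        {"name": "hotelId", "type": "Edm.String", "key": True},
        {"name": "name", "type": "Edm.String", "searchable": True, "sortable": True},
        {"name": "description", "type": "Edm.String", "searchable": True},
    ],
}
&lt;/PRE&gt;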
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Take advantage of rich IntelliSense&lt;/H3&gt;
&lt;P&gt;The extension also includes IntelliSense to guide you as you’re building out your JSON. Instead of referencing external docs each time, you can see what parameters exist and what their allowed values are as you type.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="intellisense.gif" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254019i5F9CDA873D952292/image-size/large?v=v2&amp;amp;px=999" role="button" title="intellisense.gif" alt="intellisense.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;In addition to IntelliSense, the extension provides snippets or templates for building more complex objects, such as data sources and skillsets, so that you have a good starting point.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Add or update documents in the search index&lt;/H3&gt;
&lt;P&gt;Adding or updating documents is something that’s not possible in the portal today. With the extension, you can quickly add a document, and it will even save you some time by creating a JSON template for you based on your index definition.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-document-2.png" style="width: 940px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254021iF6A986F5150A5099/image-size/large?v=v2&amp;amp;px=999" role="button" title="create-document-2.png" alt="create-document-2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;You can view or update existing documents too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Query your search indexes&lt;/H3&gt;
&lt;P&gt;Finally, once you’ve added documents to your search service, you can also query from within the extension and view the results side by side. You can even add multiple queries or save the queries to a file to refer to them later.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="all-searches.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/254022i75C787669C9DBA76/image-size/large?v=v2&amp;amp;px=999" role="button" title="all-searches.png" alt="all-searches.png" /&gt;&lt;/span&gt;&lt;/P&gt;
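&lt;P&gt;The equivalent REST call is small as well. Here is a minimal Python sketch using the same placeholder service, index, and key as above; the query text and selected fields are illustrative only.&lt;/P&gt;
&lt;PRE&gt;
import requests

SERVICE = "YOUR-SEARCH-SERVICE"   # placeholder
INDEX = "hotels"                  # placeholder
API_KEY = "YOUR-QUERY-OR-ADMIN-KEY"

url = ("https://" + SERVICE + ".search.windows.net/indexes/" + INDEX
       + "/docs/search?api-version=2020-06-30")
query = {"search": "ocean view", "top": 3, "select": "hotelId,name"}
response = requests.post(url, headers={"api-key": API_KEY}, json=query)
response.raise_for_status()
for doc in response.json()["value"]:
    print(doc["@search.score"], doc["name"])
&lt;/PRE&gt;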
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Key use cases&lt;/H2&gt;
&lt;P&gt;If your security requirements mandate the use of &lt;A href="https://docs.microsoft.com/azure/search/service-create-private-endpoint" target="_blank" rel="noopener"&gt;Private Endpoints&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/azure/search/service-configure-firewall" target="_blank" rel="noopener"&gt;IP Firewalls&lt;/A&gt;, you’ll find that some functionality is no longer available in the portal. For these cases, the extension is a great alternative to the portal for interacting with your indexes and the other components of your search service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In other cases, if you find yourself constantly recreating indexes or making small tweaks to them or other search components, the extension can make it incredibly easy to make small updates such as adding a field to an index.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;P&gt;Regardless of how you’re trying to use Cognitive Search, this extension will likely make your life easier. To get started today, &lt;A href="https://aka.ms/vscode-search" target="_blank" rel="noopener"&gt;download the extension,&lt;/A&gt; and follow the related&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/search/search-get-started-vs-code" target="_self"&gt;quickstart&lt;/A&gt;. You’ll see just how quickly and easily you can get up and running with Cognitive Search using the Visual Studio Code Extension.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you run into any issues or have any questions, please feel free to reach out to us at &lt;A href="mailto:azuresearch_contact@microsoft.com" target="_blank" rel="noopener"&gt;azuresearch_contact@microsoft.com&lt;/A&gt; or raise an issue on the extension’s &lt;A href="https://github.com/microsoft/vscode-azurecognitivesearch/issues" target="_blank" rel="noopener"&gt;GitHub repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="ms-editor-squiggler" style="color: initial; font: initial; font-feature-settings: initial; font-kerning: initial; font-optical-sizing: initial; font-variation-settings: initial; forced-color-adjust: initial; text-orientation: initial; text-rendering: initial; -webkit-font-smoothing: initial; -webkit-locale: initial; -webkit-text-orientation: initial; -webkit-writing-mode: initial; writing-mode: initial; zoom: initial; place-content: initial; place-items: initial; place-self: initial; alignment-baseline: initial; animation: initial; appearance: initial; aspect-ratio: initial; backdrop-filter: initial; backface-visibility: initial; background: initial; background-blend-mode: initial; baseline-shift: initial; block-size: initial; border-block: initial; border: initial; border-radius: initial; border-collapse: initial; border-end-end-radius: initial; border-end-start-radius: initial; border-inline: initial; border-start-end-radius: initial; border-start-start-radius: initial; inset: initial; box-shadow: initial; box-sizing: initial; break-after: initial; break-before: initial; break-inside: initial; buffered-rendering: initial; caption-side: initial; caret-color: initial; clear: initial; clip: initial; clip-path: initial; clip-rule: initial; color-interpolation: initial; color-interpolation-filters: initial; color-rendering: initial; color-scheme: initial; columns: initial; column-fill: initial; gap: initial; column-rule: initial; column-span: initial; contain: initial; contain-intrinsic-size: initial; content: initial; content-visibility: initial; counter-increment: initial; counter-reset: initial; counter-set: initial; cursor: initial; cx: initial; cy: initial; d: initial; display: block; dominant-baseline: initial; empty-cells: initial; fill: initial; fill-opacity: initial; fill-rule: initial; filter: initial; flex: initial; flex-flow: initial; float: initial; flood-color: initial; flood-opacity: initial; grid: initial; grid-area: initial; height: 0px; hyphens: initial; image-orientation: initial; image-rendering: initial; inline-size: initial; inset-block: initial; inset-inline: initial; isolation: initial; letter-spacing: initial; lighting-color: initial; line-break: initial; list-style: initial; margin-block: initial; margin: initial; margin-inline: initial; marker: initial; mask: initial; mask-type: initial; max-block-size: initial; max-height: initial; max-inline-size: initial; max-width: initial; min-block-size: initial; min-height: initial; min-inline-size: initial; min-width: initial; mix-blend-mode: initial; object-fit: initial; object-position: initial; offset: initial; opacity: initial; order: initial; origin-trial-test-property: initial; orphans: initial; outline: initial; outline-offset: initial; overflow-anchor: initial; overflow-wrap: initial; overflow: initial; overscroll-behavior-block: initial; overscroll-behavior-inline: initial; overscroll-behavior: initial; padding-block: initial; padding: initial; padding-inline: initial; page: initial; page-orientation: initial; paint-order: initial; perspective: initial; perspective-origin: initial; pointer-events: initial; position: initial; quotes: initial; r: initial; resize: initial; ruby-position: initial; rx: initial; ry: initial; scroll-behavior: initial; scroll-margin-block: initial; scroll-margin: initial; scroll-margin-inline: initial; scroll-padding-block: initial; scroll-padding: initial; scroll-padding-inline: initial; scroll-snap-align: initial; scroll-snap-stop: initial; scroll-snap-type: initial; 
shape-image-threshold: initial; shape-margin: initial; shape-outside: initial; shape-rendering: initial; size: initial; speak: initial; stop-color: initial; stop-opacity: initial; stroke: initial; stroke-dasharray: initial; stroke-dashoffset: initial; stroke-linecap: initial; stroke-linejoin: initial; stroke-miterlimit: initial; stroke-opacity: initial; stroke-width: initial; tab-size: initial; table-layout: initial; text-align: initial; text-align-last: initial; text-anchor: initial; text-combine-upright: initial; text-decoration: initial; text-decoration-skip-ink: initial; text-indent: initial; text-overflow: initial; text-shadow: initial; text-size-adjust: initial; text-transform: initial; text-underline-offset: initial; text-underline-position: initial; touch-action: initial; transform: initial; transform-box: initial; transform-origin: initial; transform-style: initial; transition: initial; user-select: initial; vector-effect: initial; vertical-align: initial; visibility: initial; -webkit-app-region: initial; border-spacing: initial; -webkit-border-image: initial; -webkit-box-align: initial; -webkit-box-decoration-break: initial; -webkit-box-direction: initial; -webkit-box-flex: initial; -webkit-box-ordinal-group: initial; -webkit-box-orient: initial; -webkit-box-pack: initial; -webkit-box-reflect: initial; -webkit-highlight: initial; -webkit-hyphenate-character: initial; -webkit-line-break: initial; -webkit-line-clamp: initial; -webkit-mask-box-image: initial; -webkit-mask: initial; -webkit-mask-composite: initial; -webkit-perspective-origin-x: initial; -webkit-perspective-origin-y: initial; -webkit-print-color-adjust: initial; -webkit-rtl-ordering: initial; -webkit-ruby-position: initial; -webkit-tap-highlight-color: initial; -webkit-text-combine: initial; -webkit-text-decorations-in-effect: initial; -webkit-text-emphasis: initial; -webkit-text-emphasis-position: initial; -webkit-text-fill-color: initial; -webkit-text-security: initial; -webkit-text-stroke: initial; -webkit-transform-origin-x: initial; -webkit-transform-origin-y: initial; -webkit-transform-origin-z: initial; -webkit-user-drag: initial; -webkit-user-modify: initial; white-space: initial; widows: initial; width: initial; will-change: initial; word-break: initial; word-spacing: initial; x: initial; y: initial; z-index: initial;"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Wed, 10 Feb 2021 21:34:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-search-index-development-with-visual-studio-code/ba-p/2120941</guid>
      <dc:creator>DerekLegenzoff</dc:creator>
      <dc:date>2021-02-10T21:34:00Z</dc:date>
    </item>
    <item>
      <title>How to use Cognitive Services and containers</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-use-cognitive-services-and-containers/ba-p/2113684</link>
      <description>&lt;P&gt;In this blog we are going to take a look at how we can run a selection of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3029145" target="_blank" rel="nofollow noopener"&gt;Cognitive Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in Docker compatible containers. This option of using these services can come in handy if you run into scenarios where your application can not connect to the cloud all the time or if you need more control over your data.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=Kg4nKWDo6OQ" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/Kg4nKWDo6OQ/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What are Cognitive Services?&lt;/H2&gt;
&lt;P&gt;Azure Cognitive Services are cloud-based services that expose AI models through a REST API. These services enable you to add cognitive features, like object detection and speech recognition, to your applications without needing data science skills. By using the provided SDKs in the programming language of your choice, you can create applications that can see (Computer Vision), hear (Speech), speak (Speech), understand (Language), and even make decisions (Decision).&lt;/P&gt;
&lt;H2&gt;Cognitive Services in containers&lt;/H2&gt;
&lt;P&gt;Azure Cognitive Services in containers give developers the flexibility to deploy and host these services wherever Docker containers can run, while keeping the same API experience as when they are hosted in Azure.&lt;/P&gt;
&lt;P&gt;Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons.&lt;/P&gt;
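&lt;P&gt;To make the "same API experience" point concrete, here is a rough Python sketch using the sentiment API as an example: the request body stays the same, and only the base URL changes from the cloud resource to the container running next to your data. The localhost port, route, and key handling are assumptions based on the sentiment container's typical setup, so check the documentation for the container you actually deploy.&lt;/P&gt;
&lt;PRE&gt;
import requests

CLOUD = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"  # placeholder
LOCAL = "http://localhost:5000"  # container mapped to port 5000 (assumption)

def sentiment(base_url, text, key=None):
    """Same request shape for cloud and container; only the host differs."""
    headers = {"Content-Type": "application/json"}
    if key:  # the cloud endpoint needs a key; the container is billed via its startup settings
        headers["Ocp-Apim-Subscription-Key"] = key
    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
    r = requests.post(base_url + "/text/analytics/v3.0/sentiment", headers=headers, json=body)
    r.raise_for_status()
    return r.json()["documents"][0]["sentiment"]

print(sentiment(LOCAL, "These documents never left the building."))
print(sentiment(CLOUD, "Same call, hosted endpoint.", key="YOUR-KEY"))
&lt;/PRE&gt;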
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;What are containers&lt;/STRONG&gt;&lt;BR /&gt;Containerization is an approach to software distribution in which an application or service, including its dependencies &amp;amp; configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. Containers can be instantiated from container images for short-term tasks, and removed when no longer needed.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=hdfbn4Q8jbo" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/hdfbn4Q8jbo/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H2&gt;When to use Cognitive Services in containers?&lt;/H2&gt;
&lt;P&gt;Running Cognitive Services in containers can be the solution for you if you have specific requirements or constraints that make it impossible to run Cognitive Services in Azure. The most common scenarios involve connectivity and control over the data. If you run Cognitive Services in Azure, all the infrastructure is taken care of for you; running them in containers moves the infrastructure responsibility, such as performance and updating the container, to you.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;A case where you would choose a container could be if your connection to Azure is not stable enough. For instance, if you have thousands of documents on-premises and you want to run OCR over them: if you use the Computer Vision OCR endpoint in the cloud you would need to send all the documents to the endpoint in Azure, while if you run the container locally you only need to send the billing information to Azure every 15 minutes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-features-and-benefits" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#features-and-benefits" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Features and benefits&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Immutable infrastructure:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Control over data:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments – across data, management, identity, and security.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Control over model updates:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Flexibility in versioning and updating the models deployed in your solutions.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Portable architecture:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Enables the creation of a portable application architecture that can be deployed on Azure, on-premises and the edge. Containers can be deployed directly to Azure Kubernetes Service, Azure Container Instances, or to a Kubernetes cluster deployed to Azure Stack. For more information, see Deploy Kubernetes to Azure Stack.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;High throughput / low latency:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Provide customers the ability to scale for high throughput and low latency requirements by enabling Cognitive Services to run physically close to their application logic and data. Containers do not cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scalability:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;With the ever-growing popularity of containerization and container orchestration software, such as Kubernetes, scalability is at the forefront of technological advancements. Building on a scalable cluster foundation, you can design your applications for high availability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Which services are available&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;P&gt;Container support is currently available for a subset of Azure Cognitive Services, including parts of:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;Group&lt;/TH&gt;
&lt;TH&gt;Service&lt;/TH&gt;
&lt;TH&gt;Documentation&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;Anomaly Detector&lt;/TD&gt;
&lt;TD&gt;Anomaly Detector&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/anomaly-detector/anomaly-detector-container-howto?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Computer Vision&lt;/TD&gt;
&lt;TD&gt;Read OCR (Optical Character Recognition)&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Spatial Analysis&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-container?tabs=azure-stack-edge&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Form Recognizer&lt;/TD&gt;
&lt;TD&gt;Form Recognizer&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/form-recognizer/form-recognizer-container-howto?" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Language Understanding&lt;/TD&gt;
&lt;TD&gt;Language Understanding&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/luis/luis-container-howto?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Speech&lt;/TD&gt;
&lt;TD&gt;Custom Speech-to-text&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=cstt&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Custom Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ctts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Speech-to-text&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=tts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Neural Text-to-speech&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Speech language detection&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=lid&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Text Analytics&lt;/TD&gt;
&lt;TD&gt;Key Phrase Extraction&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=keyphrase&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Text language detection&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=language&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD&gt;Sentiment analysis&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers?tabs=sentiment&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Face&lt;/TD&gt;
&lt;TD&gt;Face&lt;/TD&gt;
&lt;TD&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-how-to-install-containers?&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Documentation&lt;/A&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H2&gt;&lt;BR /&gt;&lt;A id="user-content-how-to-use-cognitive-services-in-containers" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#how-to-use-cognitive-services-in-containers" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;How to use Cognitive Services in containers&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;The use of the services in containers is exactly the same as if you were using them in Azure. The deployment of the container is the part that takes a bit of planning and research. The services are shipped as Docker containers, which means they can be deployed to any Docker-compatible platform, whether that is your local machine running Docker Desktop or a fully scalable Kubernetes installation in your on-premises data center.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-generic-workflow" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#generic-workflow" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Generic workflow&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Create the resource in Azure&lt;/LI&gt;
&lt;LI&gt;Get the endpoint&lt;/LI&gt;
&lt;LI&gt;Retrieve the API Key&lt;/LI&gt;
&lt;LI&gt;Find the container for the service&lt;/LI&gt;
&lt;LI&gt;Deploy the container&lt;/LI&gt;
&lt;LI&gt;Use the container endpoint as you would use the API resource&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Optionally, you can mount your own storage and connect&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Application Insights&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
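&lt;P&gt;To make the workflow above concrete, below is a minimal sketch of running one of the containers locally with Docker, using the Text-to-Speech image that is also used later in this post. The endpoint and API key placeholders come from the Azure resource you created, and the memory and CPU values are only an example; check the documentation of the specific container for its recommended resources.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 \
    mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest \
    Eula=accept \
    Billing=&amp;lt;insert endpoint&amp;gt; \
    ApiKey=&amp;lt;insert apikey&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Once the container is up, it listens on http://localhost:5000 and exposes the same API as the hosted service, while only the billing information is sent to Azure.&lt;/P&gt;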
&lt;H2&gt;Tutorial: Run a Text to Speech container in an Azure Container Instance.&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;In this tutorial we are going to run a Cognitive Services Speech container in an Azure Container Instance and use the REST API to convert text into speech.&lt;/P&gt;
&lt;P&gt;To run the code below you need an Azure subscription; if you don’t have an&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/free/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure subscription&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;you can get $200 credit for the first month. You also need the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/cli/azure/what-is-azure-cli?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure command-line interface&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;installed. If you don't have the Azure CLI installed,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;follow this tutorial&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=8KuJKlDSNwA" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/8KuJKlDSNwA/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-1-create-a-resource-group" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#1-create-a-resource-group" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;1. Create a resource group&lt;/H3&gt;
&lt;P&gt;Everything in Azure always starts with creating a resource group. A resource group is a container that holds related resources for an Azure solution.&lt;/P&gt;
&lt;P&gt;To create a resource group using the CLI you have to specify two parameters: the name of the group and the location where the group is deployed.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az group create --name demo_rg --location westeurope
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;H3&gt;&lt;BR /&gt;2. Create Cognitive Service resource&lt;/H3&gt;
&lt;P&gt;The next resource that needs to be created is a Cognitive Services account. To create this resource we need to specify a few parameters. Besides the name and resource group, you need to specify the kind of cognitive service you want to create. For our tutorial we are creating a 'SpeechServices' resource.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account create \
    --name speech-resource \
    --resource-group demo_rg \
    --kind SpeechServices \
    --sku F0 \
    --location westeurope \
    --yes
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-3-get-the-endpoint--api-key" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#3-get-the-endpoint--api-key" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;3. Get the endpoint &amp;amp; API Key&lt;/H3&gt;
&lt;P&gt;If steps 1 and 2 are successfully deployed, we can extract the properties we need for running the container in the next step. The two properties we need are the endpoint URL and the API key. The speech service in the container uses these properties to connect to Azure every 15 minutes to send the billing information.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;To retrieve the endpoint:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account show --name speech-resource --resource-group demo_rg  --query properties.endpoint -o json
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;To retrieve the API keys:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az cognitiveservices account keys list --name speech-resource --resource-group demo_rg
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-3-deploy-the-container-in-an-aci" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#3-deploy-the-container-in-an-aci" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;3. Deploy the container in an ACI&lt;/H3&gt;
&lt;P&gt;One of the easiest ways to run a container is to use&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/container-instances/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Container Instances&lt;/A&gt;. With one command in the Azure CLI you can deploy a container and make it accessible to everyone.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Creating an ACI takes a few parameters. If you want your ACI to be accessible from the internet you need to specify the parameter '--dns-name-label'. The URL for the ACI will look like this: http://{dns-name-label}.{region}.azurecontainer.io. The dns-name-label property needs to be unique.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az container create \
    --resource-group demo_rg \
    --name speechcontainer \
    --dns-name-label &amp;lt;insert unique name&amp;gt; \
    --memory 2 --cpu 1 \
    --ports 5000 \
    --image mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest \
    --environment-variables \
        Eula=accept \
        Billing=&amp;lt;insert endpoint&amp;gt; \
        ApiKey=&amp;lt;insert apikey&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;The deployment of the container takes a few minutes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-4-validate-that-a-container-is-running" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#4-validate-that-a-container-is-running" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;4. Validate that a container is running&lt;/H3&gt;
&lt;P&gt;The easiest way to validate that the container is running is to open the container homepage in a browser. To do this you first need to retrieve the URL for the container, which can be done using the Azure CLI with the following command.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;az container show --name speechcontainer --resource-group demo_rg --query ipAddress.fqdn -o json
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Navigate to the URL on port 5000. The URL should look like this:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;&lt;A href="http://{dns-name-label}.{region}.azurecontainer.io:5000/" target="_blank" rel="noopener"&gt;http://{dns-name-label}.{region}.azurecontainer.io:5000/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;If everything went well you should see a screen like this:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="container_is_running" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252211iD041E4C90262D356/image-size/medium?v=v2&amp;amp;px=400" role="button" title="container_is_running" alt="container_is_running" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN&gt;6.&amp;nbsp;&lt;/SPAN&gt;Submit your first task&lt;/H3&gt;
&lt;P&gt;The Text to Speech service in the container is a REST endpoint. To use it we need to send a POST request. There are many ways to do this; for our tutorial we are going to use Visual Studio Code.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Requirements&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Download&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and Install Visual Studio Code&lt;/LI&gt;
&lt;LI&gt;Install a plugin called&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://marketplace.visualstudio.com/items?itemName=humao.rest-client&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;REST Client&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;If you have Visual Studio Code with the REST Client installed, create a file called rest.http and copy-paste the code below into the file.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;POST http://&amp;lt;dns-name-label&amp;gt;.&amp;lt;region&amp;gt;.azurecontainer.io:5000/speech/synthesize/cognitiveservices/v1  HTTP/1.1
Content-Type: application/ssml+xml
X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
Accept: audio/*

&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;
    &amp;lt;voice name="en-US-AriaRUS"&amp;gt;
        The future we invent is a choice we make. 
        Not something that just happens.
    &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;UL&gt;
&lt;LI&gt;Change the host name in the URL to the URL of your ACI.&lt;/LI&gt;
&lt;LI&gt;Next, click on the Send Request link (just above the URL).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;On the right side of VS Code you should see the response of the API. In the top-right corner you see a "Save Response Body" button; click it and save the response as a .wav file. Now you can use any media player to play the response.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="vscode_api_response" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/252214i2C870D0807E43A24/image-size/large?v=v2&amp;amp;px=999" role="button" title="vscode_api_response" alt="vscode_api_response" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
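&lt;P&gt;If you prefer the command line over Visual Studio Code, the same request can be sent with curl and the audio written straight to a .wav file. This is a sketch that assumes the same container endpoint, headers, and SSML body used above.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;curl -X POST "http://&amp;lt;dns-name-label&amp;gt;.&amp;lt;region&amp;gt;.azurecontainer.io:5000/speech/synthesize/cognitiveservices/v1" \
    -H "Content-Type: application/ssml+xml" \
    -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
    -d '&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;&amp;lt;voice name="en-US-AriaRUS"&amp;gt;The future we invent is a choice we make.&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;' \
    --output response.wav
&lt;/CODE&gt;&lt;/PRE&gt;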
&lt;H2&gt;Learn more&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A title="Get started with a free Azure Account" href="https://azure.microsoft.com/free/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="noopener"&gt;Get started with a free Azure Account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678" target="_blank" rel="nofollow noopener"&gt;Get skilled on AI and ML – on your terms with Azure AI&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A title="AI Developer page" href="https://azure.microsoft.com/overview/ai-platform/dev-resources/?OCID=AID3029145&amp;amp;WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;AI Developer page&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A title="Watch Azure AI Essentials: Easily add AI to your applications" href="https://www.youtube.com/watch?v=Kg4nKWDo6OQ&amp;amp;list=PLLasX02E8BPBkMW8mAyNcRxk4e3l-l_p0&amp;amp;index=2" target="_blank" rel="nofollow noopener"&gt;Watch Azure AI Essentials: Easily add AI to your applications&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;A id="user-content-microsoft-learn" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#microsoft-learn" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Microsoft Learn&lt;/H3&gt;
&lt;P&gt;Microsoft Learn is a free, online training platform that provides interactive learning for Microsoft products and more.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;For this blog we have created a custom&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Collection of Learn Modules" href="https://aka.ms/ai/learn/cognitive-containers" target="_blank" rel="nofollow noopener"&gt;Collection of Learn Modules&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;covering all the topics in depth.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;H3&gt;&lt;A id="user-content-blogs-and-articles" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#blogs-and-articles" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;Blogs and articles&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/getting-started-with-azure-cognitive-services-in-containers/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Getting started with Azure Cognitive Services in containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/bringing-ai-to-the-edge/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Bringing AI to the edge&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/blog/running-cognitive-service-containers/?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Running Cognitive Service containers&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;BR /&gt;&lt;A id="user-content-on-microsoft-docs" class="anchor" href="https://github.com/hnky/blog/blob/master/How-to-use-Cognitive-Services-and-containers.md#on-microsoft-docs" target="_blank" rel="noopener" aria-hidden="true"&gt;&lt;/A&gt;On Microsoft Docs&lt;BR /&gt;&lt;BR /&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/cognitive-services-container-support?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers Support&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/container-reuse-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Create containers for reuse&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/azure-container-instance-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Deploy and run container on Azure Container Instance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/azure-kubernetes-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Deploy the Text Analytics language detection container to Azure Kubernetes Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/docker-compose-recipe?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Use Docker Compose to deploy multiple containers&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/container-faq?WT.mc_id=aiml-12167-heboelma" target="_blank" rel="nofollow noopener"&gt;Azure Cognitive Services containers frequently asked questions (FAQ)&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Feb 2021 06:43:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-use-cognitive-services-and-containers/ba-p/2113684</guid>
      <dc:creator>hboelman</dc:creator>
      <dc:date>2021-02-11T06:43:42Z</dc:date>
    </item>
    <item>
      <title>Build a natural custom voice for your brand</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice is a &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/text-to-speech" target="_blank" rel="noopener"&gt;Text-to-Speech&lt;/A&gt; (TTS) feature of Speech in Azure Cognitive Services that allows you to create a one-of-a-kind customized synthetic voice for your brand. Since its preview in September 2019, Custom Neural Voice has empowered organizations such as AT&amp;amp;T, Duolingo, Progressive, and Swisscom to develop branded speech solutions that delight users.&amp;nbsp;(For more details, read the &lt;A href="https://aka.ms/AAatzsx" target="_blank" rel="noopener"&gt;Innovation Stories blog&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Custom Neural Voice is now generally available (GA). It is important to note that although Custom Neural Voice is GA from a technological standpoint, interested &lt;A href="http://aka.ms/customneural" target="_blank" rel="noopener"&gt;customers must apply&lt;/A&gt; and be approved to use it. Alternatively, developers can add TTS capabilities to their apps quickly by creating an Azure Speech instance and selecting from over 200 pre-built TTS and Neural TTS voices across 54 languages/locales.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we’ll introduce how Custom Neural Voice works and share best practices in responsibly creating a highly natural brand voice for your apps. &amp;nbsp;If you have questions, join us at our ‘&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai-ama/bd-p/AzureAIAMA?ranMID=24542&amp;amp;ranEAID=je6NUbpObpQ&amp;amp;ranSiteID=je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q&amp;amp;epi=je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q&amp;amp;irgwc=1&amp;amp;OCID=AID2000142_aff_7593_1243925&amp;amp;tduid=(ir__z0vjacwkinyoagkyisqgt9flum2xpkxxktxeok6d00)(7593)(1243925)(je6NUbpObpQ-DsGawy0mnol6Mz.fyiJx7Q)()&amp;amp;irclickid=_z0vjacwkinyoagkyisqgt9flum2xpkxxktxeok6d00" target="_blank" rel="noopener"&gt;Ask-Microsoft-Anything&lt;/A&gt;’ on Wednesday, 2/10 at 9AMPT. &lt;A href="https://www.myeventurl.com/Events/Details/203" target="_blank" rel="noopener"&gt;Add to Calendar&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Your voice, your brand&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In a world where voice-based interactions are increasingly becoming the norm, your voice is your brand. A recognizable digital voice helps your customers connect with your brand in new ways.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In recent years we have seen increased interest from a broad range of companies across Media and Entertainment, Telecom, Automobile, Education, and Hospitality, who consider voice-based interactions from a range of devices like phones, speakers, TV/cable boxes, and cars as a key interaction point with their customers. These organizations are looking to have a consistent, branded experience delivered directly to their customers. To highlight one such example, below is an audio sample of the 'Flo' virtual chatbot from Progressive.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Sample: 'Flo' from Progressive&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/flo_sample.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice empowers people and organizations in many ways. The following scenarios are examples of use cases where customers find Custom Neural Voice particularly useful and valuable:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Customer Service Chatbots&lt;/STRONG&gt; – Companies can automate their call center operation with conversational AI to answer calls from customers with a natural-sounding voice that conveys friendliness, empathy, and professionalism and other values that are important to companies. For example, Progressive is using Custom Neural Voice to enable their virtual version of Flo to help their customers with ‘everything from getting a free car insurance to general insurance questions’. &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Read the full story&lt;/A&gt;.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Assistants&lt;/STRONG&gt; – Companies developing smart assistants on appliances, cars, and homes can use Custom Neural Voice to create a unique synthetic voice that conveys the brand of the company, the persona of the assistant, and a speaking style that enables the best experience for their target users. With Custom Neural Voice, Swisscom was able to create a multilingual voice assistant that sounds human and unique to Swisscom and resonates with its audience. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Read the full story&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Online Learning&lt;/STRONG&gt; – Education providers can add speech to their learning material with a voice that is suitable for the subjects and the students, thereby improving the engagement of the students and the effectiveness of the learning. Duolingo is using the Custom Neural Voice capability to develop stylized voices for their virtual characters for their online learning experience. &lt;A href="https://youtu.be/m-3-D7S0piw?t=672" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Audio Books&lt;/STRONG&gt; – Content publishers can turn written content into audio that is spoken with a synthetic voice to make it more accessible to the global audience. With Custom Neural Voice, the content publishers can create one or more unique voices with natural reading styles that match the subject and context of the content as well as the preference of the listeners. The Beijing Hongdandan Visually Impaired Service Center is using the Custom Neural Voice capability to produce audiobooks based on the voice of Lina, a trainer at the organization whose voice is familiar to people who are blind in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Assistive Technology and Real-time Translations&lt;/STRONG&gt; – Custom Neural Voice can be used to assist people in need or to improve accessibility.&amp;nbsp; When used as an assistive technology, people with speech impairments can use it to communicate with others in a voice that sounds like them. Custom Neural Voice can also be used in other situations such as real-time translation, allowing people to communicate with others in a foreign language in a familiar voice.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Public Service Announcement&lt;/STRONG&gt; – Public service organizations can use Custom Neural Voice to create a voice that is suitable for public announcements, whether it is in an airport, a train terminal, or other venues. The use of synthetic voice provides the ability to generate announcements with dynamic content that cannot be recorded ahead of time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Benefit of Custom Neural Voice&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Traditionally, TTS requires a large volume of voice data—in the range of 10,000 lines or more—to produce a fluent voice model. Consequently, TTS models with fewer recorded lines tend to sound noticeably robotic.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the innovation of deep neural networks and a powerful base model built with speech data from many different speakers, Neural TTS can 'learn' the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Empowered with this technology, Custom Neural Voice enables users to build highly realistic voices with just a small amount of training audio. This new technology allows companies to spend a tenth of the effort traditionally needed to prepare training data while at the same time significantly increasing the naturalness of the synthetic speech output when compared to traditional training methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Listen to the samples created with Custom Neural Voice below. Or try more demos on the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Language &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;&lt;STRONG&gt;Human &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;&lt;STRONG&gt;TTS (Custom Neural Voice)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Lina (Hongdandan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lina-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lina-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;English (Australia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Thomas&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Thomas-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Thomas-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Angela&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Angela-happy-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Angela-happy-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;French (France)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Zoe (Swisscom)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Zoe-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Zoe-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;German (Germany)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Lara (Swisscom)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lara-human.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="198"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Lara-tts.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc62633372"&gt;&lt;/A&gt;How it works&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Neural Voice is based on Neural TTS technology that creates a natural-sounding voice. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally in a natural way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The underlying Neural TTS technology used for Custom Neural Voice consists of three major components: &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187" target="_blank" rel="noopener"&gt;Text Analyzer&lt;/A&gt;, &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;Neural Acoustic Model&lt;/A&gt;, and &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;Neural Vocoder&lt;/A&gt;. To generate natural synthetic speech from text, the text is first input into Text Analyzer, which provides output in the form of phoneme sequence. A phoneme is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text. Then the phoneme sequence goes into the Neural Acoustic Model to predict acoustic features that define speech signals, such as the timbre, speaking style, speed, intonations, and stress patterns, etc. Finally, the Neural Vocoder converts the acoustic features into audible waves so that synthetic speech is generated.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS voice models are trained using deep neural networks based on real voice recording samples. With the customization capability of Custom Neural Voice, you can adapt the Neural TTS engine to better fit your user scenarios. To create a custom neural voice, visit the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt; to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Depending on the use case, Custom Neural Voice can be used to convert text into speech in real-time (e.g., used in a smart virtual assistant) or generate audio content offline (e.g., used in audiobooks or instructions in e-learning applications) with the text input provided by the user.&amp;nbsp; This is made available &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech" target="_blank" rel="noopener"&gt;through REST APIs&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Speech SDK&lt;/A&gt;&lt;SPAN&gt;,&lt;/SPAN&gt; or a &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;no-code Audio Content Creation tool&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
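&lt;P&gt;As a rough illustration of the real-time path, the sketch below first exchanges a Speech resource key for a short-lived access token and then sends SSML to a deployed custom voice endpoint over REST. The region, key, deployment ID, and voice name are placeholders; the exact endpoint for your deployment is shown in the Speech Studio, and the REST reference linked above describes the supported headers and output formats.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Exchange the Speech resource key for a short-lived access token
TOKEN=$(curl -s -X POST "https://&amp;lt;region&amp;gt;.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
    -H "Ocp-Apim-Subscription-Key: &amp;lt;your-speech-key&amp;gt;" \
    -H "Content-Length: 0")

# Synthesize speech with the deployed custom neural voice (placeholder deployment ID and voice name)
curl -X POST "https://&amp;lt;region&amp;gt;.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=&amp;lt;deployment-id&amp;gt;" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/ssml+xml" \
    -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
    -d '&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;&amp;lt;voice name="&amp;lt;your-custom-voice-name&amp;gt;"&amp;gt;Welcome back! How can I help you today?&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;' \
    --output custom-voice-sample.wav
&lt;/CODE&gt;&lt;/PRE&gt;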
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Building a Custom Neural Voice&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of Microsoft’s commitment to responsible AI, we are designing and releasing Custom Neural Voice with the intention of protecting the rights of individuals and society, fostering transparent human-computer interaction, and counteracting the proliferation of harmful deepfakes and misleading content. For this reason, we have limited the access and use of Custom Neural Voice. &lt;A href="http://aka.ms/customneural" target="_blank" rel="noopener"&gt;Submit an intake form here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft requires every customer to obtain explicit written permission from the voice talent before creating a voice model (see &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Disclosure for Voice Talent&lt;/A&gt;). In addition, you must not use custom neural voice for certain prohibited use cases (see &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Code of Conduct&lt;/A&gt;) and must disclose the synthetic nature of the service to your users upon deployment of the custom voice model (see &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/concepts-disclosure-guidelines" target="_blank" rel="noopener"&gt;Disclosure Guidelines&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When preparing your recording script, make sure you include the following sentence to acquire the voice talent’s acknowledgement of using their voice data to create a TTS voice model and generate synthetic speech.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;“I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;As a technical safeguard intended to prevent misuse of Custom Neural Voice services, Microsoft will use this recording to verify that the voice talent’s voice in the script matches the voice provided in the training data through the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speaker-recognition-overview#speaker-verification" target="_blank" rel="noopener"&gt;Speaker Verification&lt;/A&gt; technology. Read more about this process in the &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Data and Privacy document&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the video below, we introduce how to use the Speech Studio to create a highly natural voice with your own data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/di3vKMhyLaY" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/di3vKMhyLaY/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Creating a great custom voice requires careful quality control in each step, from voice design and data preparation to the deployment of the voice model to your system. &lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;This docs page&lt;/A&gt; outlines in more detail the characteristics, limitations, and best practices in designing and building a custom neural voice. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below are some key steps to take when creating a custom neural voice for your organization. (Note: this presumes you have applied and have been approved for use of Custom Neural Voice.)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 1: Persona design&lt;/H4&gt;
&lt;P&gt;First, design a persona of the voice that represents your brand using a persona brief document that defines elements such as the features of the voice, and the character behind the voice. This will help to guide the process of creating a custom voice model, including defining the scripts, selecting your voice talent, training and voice tuning.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 2: Script selection&lt;/H4&gt;
&lt;P&gt;Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you are creating a customer service bot. Include different sentence types in your scripts, including statements, questions, exclamations, etc.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 3: Preparing training data&lt;/H4&gt;
&lt;P&gt;We recommend that the audio recordings be captured in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model heavily depends on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Common issues with recordings include speaking style mismatch (e.g., not speaking in the ‘excited’ manner that you want the voice to have), unnatural speed, unstable breaks, wrong pronunciation of words, etc. It is recommended that you work with a voice director to control the recording quality. Follow the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/record-custom-voice-samples" target="_blank" rel="noopener"&gt;recording guidance here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the recordings are ready, follow the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice-prepare-data" target="_blank" rel="noopener"&gt;instructions here&lt;/A&gt; to prepare the training data in the right format.&lt;/P&gt;
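&lt;P&gt;As a rough illustration only (the instructions linked above are the authoritative reference for the naming rules and format details), the training set typically pairs the recorded .wav files with a plain-text script file in which each line maps an utterance ID to the exact text that was recorded, separated by a tab:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;0000000001	I am delighted to welcome you to our annual developer conference.
0000000002	Your order has shipped and should arrive within three business days.
0000000003	Thank you for calling. How can I help you today?
&lt;/CODE&gt;&lt;/PRE&gt;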
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 4: Testing&lt;/H4&gt;
&lt;P&gt;Prepare test scripts for your voice model that cover the different use cases for your apps. It’s recommended that you use scripts within and outside the training dataset so you can test the quality more broadly for different content.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Step 5: Tuning and adjustment&lt;/H4&gt;
&lt;P&gt;The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, several adjustments can be made using &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;SSML (Speech Synthesis Markup Language)&lt;/A&gt; when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the TTS service to convert text into audio. The adjustments include changes to pitch, rate, and intonation, as well as pronunciation correction.&amp;nbsp; If the voice model is built with multiple styles, SSML can also be used to switch between the styles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All of the SSML markups mentioned above can be passed directly to the API.&amp;nbsp; We also provide an online tool, &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;Audio Content Creation&lt;/A&gt;, that allows customers to fine-tune their audio output using a friendly UI.&lt;/P&gt;
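&lt;P&gt;As an illustration, the SSML below asks a deployed custom voice to speak slightly slower and at a slightly higher pitch. The voice name is a placeholder for the name you chose when deploying your model, and the prosody values are examples; see the SSML documentation linked above for the full set of supported adjustments.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;
    &amp;lt;voice name="&amp;lt;your-custom-voice-name&amp;gt;"&amp;gt;
        &amp;lt;prosody rate="-10%" pitch="+5%"&amp;gt;
            Thanks for calling. How can I help you today?
        &amp;lt;/prosody&amp;gt;
    &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;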
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Interested in building a custom neural voice? Check the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#customization" target="_blank" rel="noopener"&gt;languages&lt;/A&gt; supported. Sign up to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview#create-the-azure-resource" target="_blank" rel="noopener"&gt;Speech service on Azure&lt;/A&gt; and get started on the&amp;nbsp;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Speech Studio&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Besides the capability to customize TTS voice models, Microsoft offers over 200 neural and standard voices covering 54 languages and locales. With these Text-to-Speech voices, you can quickly add read-aloud functionality for a more accessible app design, or give a voice to chatbots to provide a richer conversational experience to your users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;For more information:&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/AMA-SpeechCNV" target="_blank" rel="noopener"&gt;Join us&lt;/A&gt; during our ‘Ask Microsoft Anything’ on Wed., Feb. 10&lt;SUP&gt;th&lt;/SUP&gt; (9amPT) (&lt;A href="https://www.myeventurl.com/Events/Details/203" target="_blank" rel="noopener"&gt;add to Calendar&lt;/A&gt;)&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;Add Text-to-Speech to your apps today&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/customneural" target="_blank" rel="noopener"&gt;Apply for access to Custom Neural Voice&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context" target="_blank" rel="noopener"&gt;Learn more&lt;/A&gt; about responsible use of Custom Neural Voice&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try our demo&lt;/A&gt; to listen to existing neural voices&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 04 Feb 2021 05:15:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2021-02-04T05:15:47Z</dc:date>
    </item>
    <item>
      <title>QnA with Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;+ Azure Cognitive Search enables instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;over your search results&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Now, you&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;do not&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;need to spend time looking through your pile of documents to find the exact answer to your&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;query&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;There will be an instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;coming up for the user query from the most relevant documents present in your system.&amp;nbsp; A s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;olution&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;where you can ingest your pile of documents and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;query&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;over&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;them&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to get the answer as well as related relevant documents to get more inform&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This solution accelerator enables automatic bulk ingestion of documents for QnA processing via a Cognitive Search custom skill.&amp;nbsp; The sample UI showcases the combined experience of instant answers to your questions as well as the list of relevant documents.&amp;nbsp; Finally, the solution is easily deployed using a simple Deploy button, which sets up all the necessary services in your Azure subscription.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="B1.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247969iBFE457D1CC9179EE/image-size/large?v=v2&amp;amp;px=999" role="button" title="B1.PNG" alt="B1.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2 aria-level="3"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Benefit&lt;/SPAN&gt;&lt;SPAN&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Converged search experience powering instant answers and relevant documents.&lt;/LI&gt;
&lt;LI&gt;Search using natural language queries.&lt;/LI&gt;
&lt;LI&gt;One-click deployment.&lt;/LI&gt;
&lt;LI&gt;Saves end user time during search.&lt;/LI&gt;
&lt;LI&gt;Flexibility to enhance and edit instant answers.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The solution combines the power of both Azure Cognitive Search and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Maker to extract question-answer pairs from your documents before storing them in the index.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Once you deploy&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the solution&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, you get a single endpoint where for each end user query both&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the services&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;will be called&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;in parallel&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and you will get a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;combined result&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;with an instant answer powered by&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;QnA&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Maker&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;along with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the relevant documents coming from Azure Cognitive Search.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp; Checkout the&amp;nbsp;&lt;A title="Cognitive Search Question Answering Solution Accelerator (github.com)&amp;nbsp;" href="https://github.com/Azure-Samples/search-qna-maker-accelerator" target="_self"&gt;Cognitive Search Question Answering Solution Accelerator (github.com)&amp;nbsp;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Architecture:&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="A1.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247968i1DFD2E013A027938/image-size/large?v=v2&amp;amp;px=999" role="button" title="A1.PNG" alt="A1.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;This solution accelerator contains the following artifacts:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ARM template to set up the solution.&lt;/LI&gt;
&lt;LI&gt;Custom skill in Azure Cognitive Search, which ingests the data into QnA Maker.&lt;/LI&gt;
&lt;LI&gt;User interface to view the results.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 aria-level="3"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Live Demo Link:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;You can view a live demo of this repo at the following link:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/qnaWithAzureSearchDemo" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://aka.ms/qnaWithAzureSearchDemo&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:240,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;File Type Supported:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Currently instant answers will only be available for the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-sources-and-content#file-and-url-data-types" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;file types supported by QnA Maker&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;By default, the logic in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure Cognitive&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Search service&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;indexer also ingests only the following file types: .pdf,.docx,.doc,.xlsx,.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;xls&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,.html,.rtf,.txt,.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;tsv&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. You can change this by modifying the &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;indexedFileNameExtensions&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; property in the &lt;/SPAN&gt;&lt;A href="https://github.com/jennifermarsman/cognitive-search-qna-solution/blob/main/CustomSkillForDataIngestion/QnAIntegrationCustomSkill/Assets/Indexer.json" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Indexer.json&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;Tutorial:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;NOTE: You need to have a GitHub account and an &lt;A title="Azure subscription" href="https://azure.microsoft.com/en-in/free/search/?&amp;amp;ef_id=CjwKCAiA6aSABhApEiwA6Cbm__JLI8gmtvf_CU83p4p9LOtvNL79avKBUrpDSNLNtOqfHPwrL2xjmBoCMqYQAvD_BwE:G:s&amp;amp;OCID=AID2100054_SEM_CjwKCAiA6aSABhApEiwA6Cbm__JLI8gmtvf_CU83p4p9LOtvNL79avKBUrpDSNLNtOqfHPwrL2xjmBoCMqYQAvD_BwE:G:s&amp;amp;dclid=CjkKEQiA6aSABhDamMvU3YfhmvEBEiQARvYBV-brWGJCeMzy4yHQaETcb2T8oireOC1K7_OlXTvkia7w_wcB" target="_self"&gt;Azure subscription&lt;/A&gt; to try out this solution.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H4 aria-level="3"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;Resource creation and deployment:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;C&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;lick&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;A title="here to Deploy to Azure." href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fsearch-qna-maker-accelerator%2Fmain%2Fazuredeploy.json" target="_self"&gt;here to Deploy to Azure.&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;This&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;will take you to the create blade where all the information will be&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;pre-filled&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, as shown below. Cl&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ick&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Review+ Create button to proceed.&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T1.PNG" style="width: 805px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247990i1478AD64E6B80255/image-size/large?v=v2&amp;amp;px=999" role="button" title="T1.PNG" alt="T1.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Your deployment will take 4-5 minutes to complete. Once completed, you will land on the following page:&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T2.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247971i9A250164A9FCF28E/image-size/large?v=v2&amp;amp;px=999" role="button" title="T2.PNG" alt="T2.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click on Deployment details to check all the resources that have been created.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T8.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247993i7624A7F83AC8AADA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T8.PNG" alt="T8.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Initialization:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;To initialize the solution, c&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;lick on the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;Output&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;s”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;button&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;on&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the left&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;C&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;opy the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;http trigger&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;to initialize&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;accelerator" value.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN 
class="NormalTextRun SCXW106106251 BCX8"&gt;O&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;pen&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;a new browser tab and paste th&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;is&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;URL into the browser. This will run for about a minute&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW106106251 BCX8"&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW106106251 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW106106251 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then you'll see a message indicating success or failure.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW106106251 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T4.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247974iCC761AD4821A73D0/image-size/large?v=v2&amp;amp;px=999" role="button" title="T4.PNG" alt="T4.PNG" /&gt;&lt;/span&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="TextRun SCXW224694491 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW224694491 BCX8"&gt;If the initialization is successful, then following message will appear:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW224694491 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T5.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247973iC120841094F795B6/image-size/large?v=v2&amp;amp;px=999" role="button" title="T5.PNG" alt="T5.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW224694491 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;Once&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the resources are initialized, you can access the portal through the&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW158062791 BCX8"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;UI portal link&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW158062791 BCX8"&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;”&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;val&lt;/SPAN&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;ue&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW158062791 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW158062791 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the Output tab.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW158062791 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T6.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247975iF776FE75A4D974DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T6.PNG" alt="T6.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
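&lt;P&gt;If you prefer scripting, the initialization trigger can be called outside the browser as well. The URL below is only a placeholder for the "http trigger to initialize accelerator" value copied from the Outputs tab; the route and key shown are not the accelerator's actual values.&lt;/P&gt;
&lt;PRE&gt;
# Hedged sketch: call the initialization HTTP trigger from Python instead of
# pasting it into a browser tab. Expect it to run for about a minute.
import requests

INIT_TRIGGER_URL = "https://YOUR-FUNCTION-APP.azurewebsites.net/api/YOUR-INIT-FUNCTION?code=YOUR-FUNCTION-KEY"

response = requests.get(INIT_TRIGGER_URL, timeout=300)
print(response.status_code)   # 200 on success
print(response.text)          # success or failure message from the trigger
&lt;/PRE&gt;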
&lt;H3&gt;&lt;SPAN class="TextRun SCXW108720999 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW108720999 BCX8" data-ccp-parastyle="heading 3"&gt;Upload Documents:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;You can upload the documents one by one through the UI portal, by going&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW10568960 BCX8"&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;to&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW10568960 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW10568960 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the Upload tab.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T7.PNG" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247976i064F83000CD342E8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="T7.PNG" alt="T7.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;You can also upload the documents in bulk, through&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;a&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;container.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL class="lia-list-style-type-disc"&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Go to your storage account.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T3.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247972i0877C45B7CC56167/image-size/large?v=v2&amp;amp;px=999" role="button" title="T3.PNG" alt="T3.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;Click on Containers and select qna-container to upload the documents in bulk.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T9.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247977iC8FD7AF9E44693DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="T9.PNG" alt="T9.PNG" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T10.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247978i179188CB6221FCC2/image-size/large?v=v2&amp;amp;px=999" role="button" title="T10.PNG" alt="T10.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;Use the Upload tab and select the multiple files you want to ingest. It will take some time to index the documents and extract the question-answer pairs from them.&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T11.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247981i2F1755C50C6172E7/image-size/large?v=v2&amp;amp;px=999" role="button" title="T11.PNG" alt="T11.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
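&lt;P&gt;For larger batches, the same bulk upload can be scripted with the Azure Storage Blob SDK for Python (pip install azure-storage-blob). This is an illustrative sketch, not part of the accelerator: the connection string is a placeholder from your storage account's Access keys blade, and "documents" is an assumed local folder name.&lt;/P&gt;
&lt;PRE&gt;
# Hedged sketch: upload every file in a local folder to the qna-container
# blob container created by the deployment, so the indexer can pick them up.
import os
from azure.storage.blob import ContainerClient

CONNECTION_STRING = "YOUR-STORAGE-ACCOUNT-CONNECTION-STRING"
container = ContainerClient.from_connection_string(CONNECTION_STRING,
                                                   container_name="qna-container")

for name in os.listdir("documents"):
    path = os.path.join("documents", name)
    if os.path.isfile(path):
        with open(path, "rb") as data:
            # overwrite=True replaces a blob of the same name if it already exists
            container.upload_blob(name=name, data=data, overwrite=True)
            print("uploaded", name)
&lt;/PRE&gt;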
&lt;H3&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW73896261 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW73896261 BCX8" data-ccp-parastyle="heading 3"&gt;Question Answer Enhancement:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;Once the ingestion is complete, you can view all the Question Answer pairs extracted from the documents by&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;clicking on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW12433349 BCX8"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;Knowledge Base&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW12433349 BCX8"&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW12433349 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW12433349 BCX8"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW12433349 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T12.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247980iECB7617F408F35CF/image-size/large?v=v2&amp;amp;px=999" role="button" title="T12.PNG" alt="T12.PNG" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="EOP SCXW108720999 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:360,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW73896261 BCX8" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW12433349 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Play with your knowledge base&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;!&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextDeletion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&amp;nbsp;Y&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SpellingErrorV2 SCXW244092604 BCX8"&gt;ou&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;can also test&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;for different queries using the Test Pane. 
Once you are satisfied with the experience, click on&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Save and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextDeletion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;T&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;rain&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and then&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;“&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;Publish&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TrackChangeTextInsertion TrackedChange SCXW244092604 BCX8"&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;”&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW244092604 BCX8" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW244092604 BCX8"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;the changes to get these changes&amp;nbsp;reflected on your main portal.&lt;SPAN class="EOP SCXW244092604 BCX8" data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="T13.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/247982i003DA618BB6ED3AE/image-size/large?v=v2&amp;amp;px=999" role="button" title="T13.PNG" alt="T13.PNG" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun CommentStart SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;&lt;SPAN data-contrast="auto"&gt;This solution has been specifically created for our customers to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;address&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;long-term standing&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ask&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;retrieve&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;an&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;instant answer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;from the relevant document&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. This solution currently covers the basic&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;functionality,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and we will keep adding more features based on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;user interaction and customer’s feedback.&amp;nbsp; Please feel free to drop us a mail at&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;A tabindex="-1" title="mailto:search-qna-solution@microsoft.com" href="mailto:search-qna-solution@microsoft.com" target="_blank" rel="noreferrer noopener"&gt;search-qna-solution@microsoft.com &lt;/A&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun CommentStart SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;&lt;SPAN data-contrast="auto"&gt;to provide your valuable feedback.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun CommentStart SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;Useful&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW23627323 BCX8" data-contrast="none"&gt;&lt;SPAN class="NormalTextRun SCXW23627323 BCX8" data-ccp-parastyle="heading 3"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Links:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A title="Azure Cognitive Search documentation" href="https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search" target="_self"&gt;Azure Cognitive Search documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/search/search-sku-tier" target="_blank" rel="noopener"&gt;Choose a pricing tier - Azure Cognitive Search | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/search/" target="_blank" rel="noopener"&gt;Pricing - Search | Microsoft Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#types-of-storage-accounts" target="_blank" rel="noopener"&gt;Storage account overview - Azure Storage | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/app-service/windows/" target="_blank" rel="noopener"&gt;App Service Pricing | Microsoft Azure&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans" target="_blank" rel="noopener"&gt;App Service plans - Azure App Service | Microsoft Docs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1" target="_blank" rel="noopener"&gt;QnA Maker documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1#add-a-new-question-and-answer-set" target="_blank" rel="noopener"&gt;Add a new Question Answer pair in QnA Maker&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base?tabs=v1#test-the-knowledge-base" target="_blank" rel="noopener"&gt;Test your Knowledge Base in QnA Maker&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 03 Feb 2021 07:59:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2021-02-03T07:59:57Z</dc:date>
    </item>
    <item>
      <title>Unified Neural Text Analyzer: an innovation to improve Neural TTS pronunciation accuracy</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187</link>
      <description>&lt;H1&gt;Introducing Unified Neural Text Analyzer: an innovation for Neural Text-to-Speech pronunciation accuracy improvement &amp;nbsp;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This post is co-authored by Dongxu Han, Junwei Gan and Sheng Zhao&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt;&lt;SPAN&gt; (Neural TTS)&lt;/SPAN&gt;&lt;SPAN&gt;,&lt;/SPAN&gt;&amp;nbsp;part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. For example, &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt;, &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Progressive&lt;/A&gt; and &lt;A href="https://aka.ms/MotorolaSolutions" target="_blank" rel="noopener"&gt;Motorola Solutions&lt;/A&gt; are using Azure Neural TTS to develop conversational interfaces for their voice assistants in English speaking locales. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Swisscom&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;Poste Italiane&lt;/A&gt; are adopting neural voices in French, German and Italian to interact with their customers in the European market. &lt;A href="https://customers.azure.cn/hongdandan/index.html" target="_blank" rel="noopener"&gt;Hongdandan&lt;/A&gt;, a non-profit organization, is adopting neural voices in Chinese to make their online library audible for the blind people in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, we introduce our latest innovation in the Neural TTS technology that helps to improve the pronunciation accuracy significantly: Unified Neural Text Analyzer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is text analyzer?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS converts plain text into waveform via three modules: neural text analyzer, neural acoustic model and &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;neural vocoder&lt;/A&gt;. The text analyzer converts plain text to pronunciations, the acoustic model converts pronunciations to acoustic features, and finally the vocoder generates waveforms. The text analyzer is the first link of the entire TTS system, and its results directly affect the acoustic model and vocoder. The correct pronunciation of a word or phrase is a basic expectation of TTS, since it delivers the right information to the user, but it is not always easy. For example, “live” should be read differently in “We &lt;EM&gt;live&lt;/EM&gt; in a mobile world” and “TV Apps and &lt;EM&gt;live&lt;/EM&gt; streaming offerings from The Weather Network”, depending on context. If TTS reads them incorrectly, the intelligibility and naturalness of the content suffer significantly. Thus, the text analyzer is important to TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Recent updates to Neural TTS include a major innovation in the text analyzer, called “UniTA” (Unified Neural Text Analyzer). UniTA is a unified text analyzer model, which simplifies the text analyzer workflow and reduces latency in the runtime server. It adopts a multitask learning approach, jointly training all ambiguity models to resolve context ambiguity and generate correct pronunciations, and as a result reduces pronunciation errors by over 50%.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What are the challenges?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Generally, different natural languages have different linguistic grammar. In TTS, the text analyzer needs to follow the grammar of each language in order to generate correct pronunciations, which includes, but is not limited to, the following grammar categories:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt; is the process of dividing the written text into meaningful units, such as words. In English and many other languages using some form of the Latin alphabet, the space is a good approximation of a word divider. On the other hand, in languages such as Chinese or Japanese, there is no spacing in sentences. Different word segmentation results may cause different meanings and pronunciations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Part-of-Speech Tagging&lt;/STRONG&gt; is the process of marking up a word in a text as corresponding to a particular part of speech (such as noun, verb, adj, adv and so on), based on both its definition and its context.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt; is the process of classifying words according to shared inflectional categories such as person (first, second, third), number (singular vs. plural), gender (masculine, feminine, neuter) and case (nominative, oblique, genitive) with a given lexeme.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt; is the process of transforming digits or symbols to their standard format for disambiguation, for example: “$200" would be normalized as "two hundred dollars”, “200M" would be normalized as "two hundred meters” or “two hundred million”.&lt;/LI&gt;
&lt;LI&gt;Similar to Text Normalization, &lt;STRONG&gt;Abbreviation Expansion &lt;/STRONG&gt;is the process of transforming non-standard words to their standard format for disambiguation, for example: “VI" would be normalized as "six”, “St" would be normalized as "Saint” or “street”.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Polyphone Disambiguation&lt;/STRONG&gt; is the process of marking up polyphone word (heteronym word, which has one spelling but has more than one pronunciation and meaning) to its correct pronunciation based on its context.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;&lt;STRONG&gt;Example &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;English&lt;/EM&gt;]&lt;BR /&gt;Nice to meet u:) --&amp;gt; Nice / to / meet / u / :)&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Chinese&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;在圣诞节纽约大都会有演出 --&amp;gt; 在 / 圣诞节 / 纽约 / 大 / 都会(du1 hui4) / 有 / 演出&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Chinese&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;在圣诞节纽约大都会有演出 --&amp;gt; 在/ 圣诞节 / 纽约 / 大都(da4 dou1) / 会 / 有 / 演出&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Part-of-Speech&lt;/P&gt;
&lt;P&gt;Tagging&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Noun, | l ai v s |&lt;/EM&gt;]&lt;BR /&gt;Many people have lost their &lt;STRONG&gt;lives&lt;/STRONG&gt; since the cyclone because aid has not been able to be distributed.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Verb, | l I v s |&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;I also discovered the very angry raccoon that &lt;STRONG&gt;lives&lt;/STRONG&gt; near my porch.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Morphology&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Singular&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;1km --&amp;gt; one kilometer&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Plural&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;5km --&amp;gt; five kilometers&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Text Normalization&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Fraction, nine out of ten&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;The O.S. Speed T1202 ups the ante for race-winning performance, resulting in a power plant that will dominate &lt;STRONG&gt;9/10&lt;/STRONG&gt; scale competition.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Date, September tenth&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;1st episode will air &lt;STRONG&gt;9/10&lt;/STRONG&gt; with never before seen video of her birth!&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Abbreviation Expansion&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;Street&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;Oh man, biking from 24th &lt;STRONG&gt;St&lt;/STRONG&gt; BART to the 29th &lt;STRONG&gt;St&lt;/STRONG&gt; bikeshare station, that will be sweet.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;Saint&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;We continue to ask anyone who was in the wider area near &lt;STRONG&gt;St&lt;/STRONG&gt; Heliers School between 7.30am and 9am and witnessed any suspicious activity to contact police&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="13%"&gt;
&lt;P&gt;Polyphone Disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="86%"&gt;
&lt;P&gt;[&lt;EM&gt;p r ih - z eh 1 n t&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;The prices will &lt;STRONG&gt;present&lt;/STRONG&gt; the estimated discount utilizing the drug discount card.&lt;/P&gt;
&lt;P&gt;[&lt;EM&gt;p r eh 1 - z ax n t&lt;/EM&gt;]&lt;/P&gt;
&lt;P&gt;But our &lt;STRONG&gt;present&lt;/STRONG&gt; situation is not a natural one.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Most pronunciations are affected by these categories based on syntactic or semantic context, and these categories are all challenging disambiguation problems. The traditional TTS approach is a pipeline-based module called “text analyzer” with a series of models aimed at solving grammar disambiguation problems, which causes some of the following issues:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Complex model&lt;/STRONG&gt;. Redundant models are built and optimized separately but deployed together in the traditional text analyzer, which makes the pipeline long and complicated.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Error propagation&lt;/STRONG&gt;. Errors accumulated by the isolated models propagate and affect the final results.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High latency&lt;/STRONG&gt;. Models run one after another in the traditional pipeline-based text analyzer, so the time cost in the runtime server is high.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Compared to the traditional pipeline-based text analyzers, our Neural TTS proposes a Unified Neural Text Analyzer model (UniTA) to improve TTS pronunciation.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It builds a &lt;STRONG&gt;unified&lt;/STRONG&gt; text analyzer model, which greatly simplifies the text analyzer workflow and reduces time latency in the runtime server.&lt;/LI&gt;
&lt;LI&gt;It adopts a &lt;STRONG&gt;multitask learning approach&lt;/STRONG&gt;, jointly training all ambiguity models to solve context ambiguity and generate the correct pronunciations, reducing pronunciation errors by over 50%.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;How does UniTA improve pronunciations?&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Firstly, UniTA converts the input text to word embedding vectors through a pre-trained model. Word embedding is a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers. Conceptually, it involves a mathematical embedding from a space with many dimensions per word to a continuous vector space of much lower dimension. Pre-trained models like &lt;A href="https://www.microsoft.com/en-us/research/blog/a-holistic-representation-toward-integrative-ai/" target="_blank" rel="noopener"&gt;XYZ-Code&lt;/A&gt; have demonstrated unprecedented effectiveness for learning universal language representations from unlabeled corpora, and the approach has achieved great success in many tasks like language understanding and language generation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Secondly, a sequence tagging fine-tuning strategy is adopted in the UniTA model. UniTA is designed as a typical word classification task, in which&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt; predicts word delimiter as word boundary or not.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Part-of-Speech&lt;/STRONG&gt; &lt;STRONG&gt;(POS)&lt;/STRONG&gt; predicts “noun”, “verb”, “adj” and so on to classify word part-of-speech.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt; predicts “singular”, “plural”, “masculine”, “feminine”, “neuter” and so on to classify word number, gender and case.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt; &lt;STRONG&gt;(TN)&lt;/STRONG&gt; predicts candidate digits to “cardinal”, “date”, “time”, “stock” or other TN categories, and then an auxiliary component “TN Rule” helps convert digits to word form based on predicted category.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Abbreviation Expansion&lt;/STRONG&gt; predicts the expanded form of each candidate abbreviation.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Polyphone disambiguation&lt;/STRONG&gt; predicts polyphone words’ pronunciations. An auxiliary component, the “Lexicon”, supplies the pronunciations of non-polyphone words.&lt;/LI&gt;
&lt;/UL&gt;
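&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The sketch below shows one common way such a multitask sequence-tagging model can be wired up: a shared encoder feeds a separate classification head per task, and the per-task losses are summed for joint training. It is a minimal, assumed illustration (layer sizes, task names, and label counts are made up) rather than the actual UniTA architecture.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# A minimal multitask sequence-tagging sketch: a shared encoder feeds several
# per-task classification heads (POS, morphology, text normalization, ...).
# This is NOT the UniTA implementation; sizes and label sets are made up.
import torch
import torch.nn as nn

TASKS = {"pos": 17, "morphology": 12, "text_norm": 8, "polyphone": 40}

class MultitaskTagger(nn.Module):
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        # One linear head per task, all sharing the same encoder states.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(2 * dim, n_labels) for task, n_labels in TASKS.items()}
        )

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embed(token_ids))
        return {task: head(hidden) for task, head in self.heads.items()}

model = MultitaskTagger()
token_ids = torch.randint(0, 30000, (1, 12))   # a fake 12-token sentence
logits = model(token_ids)                       # one forward pass, all tasks at once

# Joint training simply sums the per-task losses over the shared parameters,
# which is what lets the tasks exchange information through the hidden layers.
targets = {task: torch.randint(0, n, (1, 12)) for task, n in TASKS.items()}
loss = sum(
    nn.functional.cross_entropy(logits[t].transpose(1, 2), targets[t]) for t in TASKS
)
loss.backward()&lt;/LI-CODE&gt;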
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Different from traditional text analyzer training, UniTA adopts a multitask learning approach to jointly train all categories together, including word segmentation, part-of-speech tagging, morphology, abbreviation expansion, text normalization and polyphone disambiguation. The multitask learning approach shares the hidden layers’ information and trains jointly across different tasks, and it has achieved state-of-the-art results on many NLP tasks. In UniTA, hidden information is likewise shared across tasks during training.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, the sentence “&lt;EM&gt;St. John had a 10-3 run to build its lead to 78-64 with 4:44 left.&lt;/EM&gt;” in the training corpus is annotated as shown in the table below. “--” means there is no related tag in that category. In the word segmentation column, the phrase “10-3” is segmented as “10”, “-” and “3”; in the morphology column, the word “had” is annotated as “past tense”; in the text normalization column, “10-3” is read with the word “to” in place of “-”, while “4:44” is read in the time format; in the abbreviation column, the word “St.” is expanded as “Saint” rather than “Street”; and in the polyphone disambiguation column, the word “lead” is pronounced as [l i: d]. The word “lead” has two pronunciations, [l i: d] and [l e d], and here the part of speech and surrounding context determine that the noun “lead” is pronounced [l i: d], so the POS task and the polyphone task can share their inner information. In this way, the multitask model improves UniTA’s accuracy.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="613"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;Word&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;&lt;STRONG&gt;Word Segmentation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;&lt;STRONG&gt;Part-of-Speech&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;&lt;STRONG&gt;Morphology&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;&lt;STRONG&gt;Text Normalization&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;&lt;STRONG&gt;Abbreviation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;&lt;STRONG&gt;Polyphone disambiguation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;St.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;Saint&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;John&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;had&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Past tense&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;a&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Det&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;10-3&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;10 / - / 3&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as “ten to three”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;run&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Singular&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;to&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Particle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;build&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;its&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Det&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;lead&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Noun&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Singular&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;l i: d&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;to&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Particle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;78-64&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;78 / - / 64&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as “seventy-eight to sixty-four”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;with&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Prep&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;4:44&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;4 / : / 44&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Num&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;numbers are predicted as time format&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;left&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Verb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;Past participle&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="52"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="133"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="62"&gt;
&lt;P&gt;Symbol&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="98"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="94"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="93"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80"&gt;
&lt;P&gt;--&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The UniTA model predicts all categories’ results together in the Neural TTS runtime service. As in training, UniTA converts the plain text to word embeddings, and the multitask sequence tagging model then predicts all the categories’ results. Some auxiliary modules are applied after the fine-tuned categories to further improve pronunciations. Finally, the pronunciation results are generated from UniTA.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is the figure of the UniTA model structure in Neural TTS:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="UniTA-Diagram.png" style="width: 747px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249840i3AB483F3C9C9FE60/image-size/large?v=v2&amp;amp;px=999" role="button" title="UniTA-Diagram.png" alt="UniTA model diagram" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;UniTA model diagram&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Pronunciation accuracy improved with UniTA&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Compared with the traditional TTS text analyzer, UniTA reduces pronunciation errors by over 50%. It is already used in many neural voice languages such as English (United States), English (United Kingdom), Chinese (Mandarin, simplified), Russian (Russia), German (Germany), Japanese (Japan), Korean (Korea), Polish (Poland) and Finnish (Finland). Because grammar varies across languages, not all categories are suitable for every language. For example, Chinese and Japanese depend heavily on word segmentation and polyphone disambiguation, while these languages don’t need morphology or abbreviation expansion.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are some samples of the pronunciation improvement using UniTA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;&lt;STRONG&gt;Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;STRONG&gt;Input text&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;(target word bolded)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;STRONG&gt;Previous pronunciation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;STRONG&gt;Current pronunciation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;太子与三殿下行过礼后坐了片刻就离开了。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;“三殿&lt;/SPAN&gt; / &lt;SPAN&gt;下行 &lt;/SPAN&gt;/ &lt;SPAN&gt;过礼”&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-1-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;“三殿下 &lt;/SPAN&gt;/ &lt;SPAN&gt;行过礼”&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-1-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;叶奎最终还是在剧痛下泄了气&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;“&lt;SPAN&gt;剧痛 &lt;/SPAN&gt;/ &lt;SPAN&gt;下泄了气&lt;/SPAN&gt;”&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-2-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;“&lt;SPAN&gt;剧痛下 &lt;/SPAN&gt;/ &lt;SPAN&gt;泄了气&lt;/SPAN&gt;”&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/WordSeg-2-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;German (Germany)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;kulturform&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;kult+urform&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/kulturform.old.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;kultur+form&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/kulturform.new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Word Segmentation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Korean (Korea)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;해외감염&lt;STRONG&gt;병&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;h&lt;SPAN&gt;̬ɛ&lt;/SPAN&gt;w&lt;SPAN&gt;ɛ&lt;/SPAN&gt;g&lt;SPAN&gt;̥&lt;/SPAN&gt;mj&lt;SPAN&gt;ʌ&lt;/SPAN&gt;m&lt;STRONG&gt;b&lt;/STRONG&gt;j&lt;SPAN&gt;ʌ&lt;/SPAN&gt;ŋ&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ko-kr_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;h̬ɛwɛg̥mjʌm&lt;STRONG&gt;p&lt;/STRONG&gt;jʌŋ&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ko-kr_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Morphology - case ambiguity&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;Количество ударов по воротам&lt;/SPAN&gt; (15 &lt;SPAN&gt;против &lt;/SPAN&gt;&lt;STRONG&gt;7)&lt;/STRONG&gt; &lt;SPAN&gt;также говорит о преимуществе чемпионов мира&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;Семь&lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ru-ru_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;Семи &lt;/SPAN&gt;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ru-ru_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Abbreviation Expansion&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;Joined &lt;STRONG&gt;TX&lt;/STRONG&gt; Army National Guard in 1979.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;T.X.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TX-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Texas&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TX-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Text Normalization&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;The Downtown Cabaret Theatre’s Main Stage Theatre division concludes its &lt;STRONG&gt;2010/11&lt;/STRONG&gt; season with the Tony Award winning musical, in the heights by Lin-Manuel Miranda.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;November 2010&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/date-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;2010 to 2011&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/date-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Chinese (Mandarin, simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;&lt;SPAN&gt;卓文君听琴后，理解了琴&lt;STRONG&gt;曲&lt;/STRONG&gt;的含意，不由脸红耳热，心驰神往。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;qu1&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/poli-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;qu3&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/poli-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;English (United States)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;I received a copy early in November, and &lt;STRONG&gt;read&lt;/STRONG&gt; and contemplated it's provisions with great satisfaction.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/read-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/read-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="112"&gt;
&lt;P&gt;Polyphone disambiguation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="130"&gt;
&lt;P&gt;Japanese (Japan)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="122"&gt;
&lt;P&gt;パッケージには、富士屋ホテルが発刊した「We Japanese&lt;SPAN&gt;」&lt;STRONG&gt;内&lt;/STRONG&gt;の説明用の挿絵を採用。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="124"&gt;
&lt;P&gt;&lt;SPAN&gt;うち&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;(w u - ch i)&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ja-jp_baseline.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;&lt;SPAN&gt;ない&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;(n a - y i)&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ja-jp_improvement.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear how the Cortana voice pronounces each word accurately with UniTA.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/3ikql0ghLkE" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/3ikql0ghLkE/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;With these updates, we’re excited to continue to power accurate, natural and intuitive voice experiences for customers worldwide. Azure Text-to-Speech service provides more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech" target="_blank" rel="noopener"&gt;200 voices in over 50 languages&lt;/A&gt; for developers all over the world.&lt;/P&gt;
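&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you would like to try a neural voice from code, the short sketch below uses the Speech SDK for Python. The subscription key and region are placeholders, and the voice name is only one example of the available neural voices.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# A minimal sketch using the Azure Speech SDK for Python
# (pip install azure-cognitiveservices-speech).
# The key, region, and voice name below are placeholders; replace them with your own.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # any neural voice

# Synthesizes to the default speaker when no audio config is supplied.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("But our present situation is not a natural one.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized to the default speaker.")&lt;/LI-CODE&gt;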
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Let us know how you are using or plan to use Neural TTS voices in this&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener noreferrer"&gt;form&lt;/A&gt;&lt;SPAN&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing your experience and developing more compelling services together with you for the developers around the world.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our &lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 28 Jan 2021 09:38:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2021-01-28T09:38:10Z</dc:date>
    </item>
    <item>
      <title>Get skilled on AI and ML – on your terms with Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678</link>
      <description>&lt;P&gt;Azure’s AI portfolio has options for every developer and data scientist, and we’re committed to empowering you to develop applications and machine learning models on your terms. Azure enables you to develop in your preferred language, environment, and machine learning framework, and allows you to deploy anywhere - to the cloud, on-premises, or the edge. We help improve your productivity regardless of your skill level, with code-first and low code/no code options which can help you accelerate the development process. We’re also devoted to empowering you with resources to help you get started with Azure AI and machine learning, grow your skills, and start building impactful solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Announcing new AI &amp;amp; ML resource pages for developers and data scientists&lt;/H2&gt;
&lt;P&gt;Today we’re excited to announce new resource pages on Azure.com, with a rich set of content for &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;data scientists&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;developers&lt;/A&gt;. Whether you’re new to AI and ML, or new to Azure, the videos, tutorials, and other content on these pages will help you get started.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Learn how your peers around the world are using Azure AI to develop AI and machine learning solutions on their terms to solve business challenges.&lt;/LI&gt;
&lt;LI&gt;Grow your skills with curated learning journeys to help you skill up on Azure AI and Machine Learning in 30 days. Each learning journey has videos, tutorials, and hands-on exercises to help prepare you to pass a Microsoft certification in just 4 weeks. Upon completing the learning journey, you’ll be eligible to receive 50% off a Microsoft Certification exam.&lt;/LI&gt;
&lt;LI&gt;Engage with our engineering teams and stay up to date with the latest innovations on our &lt;A href="https://aka.ms/AI_Hub" target="_blank" rel="noopener"&gt;AI Tech Community&lt;/A&gt;, where you’ll find blogs, discussion forums, and more.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-center"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="learn.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/250055i007A7A09B08E9477/image-size/large?v=v2&amp;amp;px=999" role="button" title="learn.jpg" alt="learn.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Pictured above: ML learning journey for developers and data scientists.&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Register for the Azure AI Hackathon&lt;/H2&gt;
&lt;P&gt;Finally, put your skills to the test by entering the &lt;A href="https://aka.ms/AzureAIHackathon" target="_blank" rel="noopener"&gt;Azure AI Hackathon&lt;/A&gt;, which starts today and will run through March 22&lt;SUP&gt;nd&lt;/SUP&gt;, 2021. Winners will be announced in early April. The most innovative and impactful projects will win prizes up to $10,000 USD. We look forward to seeing what you build with Azure AI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started today&lt;/H2&gt;
&lt;P&gt;Check out the pages to get started with your 30-day learning journey, and register for the hackathon:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/dev-resources/?OCID=AID3028733" target="_blank" rel="noopener"&gt;AI Developer Resources&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/overview/ai-platform/data-scientist-resources?OCID=AID3028733" target="_blank" rel="noopener"&gt;Data Scientist Resources&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/AzureAIHackathon" target="_blank" rel="noopener"&gt;Azure AI Hackathon&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 27 Jan 2021 21:37:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/get-skilled-on-ai-and-ml-on-your-terms-with-azure-ai/ba-p/2103678</guid>
      <dc:creator>Anand_Raman</dc:creator>
      <dc:date>2021-01-27T21:37:16Z</dc:date>
    </item>
    <item>
      <title>How to build a voice-enabled grocery chatbot with Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-voice-enabled-grocery-chatbot-with-azure-ai/ba-p/2096079</link>
      <description>&lt;P&gt;Chatbots have become increasingly popular in providing useful and engaging experiences for customers and employees. Azure services allow you to quickly create bots, add intelligence to them using AI, and customize them for complex scenarios.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through an exercise which you can complete in under two hours, to get started using Azure AI Services. This intelligent grocery bot app can help you manage your shopping list using voice commands. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="iPhoneview.png" style="width: 201px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249349iBBA75F61572086FD/image-size/medium?v=v2&amp;amp;px=400" role="button" title="iPhoneview.png" alt="iPhoneview.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add or delete grocery items by dictating them to Alexa.&lt;/LI&gt;
&lt;LI&gt;Easily access the grocery list through an app.&lt;/LI&gt;
&lt;LI&gt;Check off items using voice commands; for example, “Alexa, remove Apples from my grocery list."&lt;/LI&gt;
&lt;LI&gt;Ask Alexa to read the items you have in your grocery list.&lt;/LI&gt;
&lt;LI&gt;Automatically organize items by category to help save time at the store.&lt;/LI&gt;
&lt;LI&gt;Use any laptop or &lt;A href="https://azure.microsoft.com/en-us/services/app-service/web/" target="_blank" rel="noopener"&gt;Web Apps&lt;/A&gt; to access the app and sync changes across laptop and phone.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/?OCID=AID3024570" target="_blank" rel="noopener"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://ms.portal.azure.com/#home?OCID=AID3024570" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.developer.amazon.com/en-US/alexa" target="_blank" rel="noopener"&gt;Amazon Alexa account&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Python 3.6 or above&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Key components:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/bot-service/" target="_blank" rel="noopener"&gt;Azure Bot Service&lt;/A&gt; to develop bot and publish to Alexa channel.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://dev.botframework.com/" target="_blank" rel="noopener"&gt;Microsoft Bot Framework Emulator&lt;/A&gt; to test and debug bots using Bot Framework SDK.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.developer.amazon.com/en-US/alexa" target="_blank" rel="noopener"&gt;Alexa skills&lt;/A&gt; to interact with the bot using voice commands via Amazon Alexa.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/language-understanding-intelligent-service/" target="_blank" rel="noopener"&gt;Language Understanding&lt;/A&gt; to help users interact with the bot with natural language, by enabling the bot to understand user intent.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Solution Architecture &lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="App Ref Architecture.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249350i5120341D7F826E08/image-size/medium?v=v2&amp;amp;px=400" role="button" title="App Ref Architecture.png" alt="App Ref Architecture.png" /&gt;&lt;/span&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The user accesses the chatbot by invoking it as an Alexa skill.&lt;/LI&gt;
&lt;LI&gt;User is authenticated with Azure Active Directory.&lt;/LI&gt;
&lt;LI&gt;User interacts with the chatbot powered by Azure Bot Service; for example, user requests bot to add grocery items to a list.&lt;/LI&gt;
&lt;LI&gt;Azure Cognitive Services process the natural language request to understand what the user wants to do. (Note: If you wanted to give your bot its own voice, you can choose from over 200 voices and 54 languages/locales. &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try the demo&lt;/A&gt; to hear the different natural sounding voices.)&lt;/LI&gt;
&lt;LI&gt;The bot adds or removes content in the database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Another visual of the flow of data within the solution architecture is shown below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="App flow.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249351i658B09F9C4B31CCA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="App flow.png" alt="App flow.png" /&gt;&lt;/span&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Implementation&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps involved in creating the app along with some sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’ll start by creating an Azure Bot Service instance, and adding speech capabilities to the bot using the Microsoft Bot Framework and the Alexa skill. Bot Framework, along with Azure Bot Service, provides the tools required to build, test, deploy, and manage the end-to-end bot development workflow. In this example, we are integrating Azure Bot Service with Alexa, which can process speech inputs for our voice-based chatbot. However, for chatbots deployed across multiple channels, and for more advanced scenarios, we recommend using Azure’s &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview" target="_blank" rel="noopener"&gt;Speech service&lt;/A&gt; to enable voice-based scenarios. &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Try the demo&lt;/A&gt; to listen to the over 200 high quality voices available across 54 languages and locales.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The first step in the process is to log in to the Azure portal and follow the steps &lt;A href="https://azure.microsoft.com/en-us/services/bot-service/#pricing" target="_blank" rel="noopener"&gt;here&lt;/A&gt; to create an Azure Bot Service resource and a web app bot. To add voice capability to the bot, click on Channels to add Alexa (see the snapshot below) and note the Alexa Service Endpoint URI.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Azure Bot Service Channels.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249353iBED4C224680F9538/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Azure Bot Service Channels.png" alt="Azure Bot Service Channels" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure Bot Service Channels&lt;/span&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Next, we need to log in to the Alexa Developer Console and create an Amazon Alexa skill. After creating the skill, we are presented with the interaction model.&amp;nbsp;&lt;SPAN&gt;Replace the contents of the JSON Editor with the example interaction model below.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "interactionModel": {
    "languageModel": {
      "invocationName": "get grocery list",
      "intents": [
        {
          "name": "AMAZON.FallbackIntent",
          "samples": []
        },
        {
          "name": "AMAZON.CancelIntent",
          "samples": []
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "AMAZON.StopIntent",
          "samples": []
        },
        {
          "name": "AMAZON.NavigateHomeIntent",
          "samples": []
        },
        {
          "name": "GetGroceryItems",
          "slots": [
            {
              "name": "name",
              "type": "AMAZON.US_FIRST_NAME"
            }
          ],
          "samples": [
            "Get grocery items in the list",
            "Do I have bread in my list"
          ]
        }
      ],
      "types": []
    }
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Next, we’ll integrate the Alexa Skill with our Azure bot. We’ll need two pieces of information to do this: the Alexa Skill ID and the Alexa Service Endpoint URI. First, get the Skill ID either from the URL in the Alexa portal, or by going to the Alexa Developer Console and clicking “View Skill ID”. The Skill ID should be a value like ‘amzn1.ask.skill.’ followed by a GUID. Then, get the Alexa Service Endpoint URI from the Azure portal, by going to the channels page of our Azure Web App Bot and clicking on Alexa to copy the Alexa Service Endpoint URI. Then integrate as shown:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Amazon Developer Console&lt;/STRONG&gt;: After building the Alexa Skill, click on Endpoint and paste the Alexa Service Endpoint URI that we copied from the Azure portal and save the Endpoints.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Amazon Developer Console.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249354i6D082908ABD21583/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Amazon Developer Console.jpg" alt="Amazon Developer Console.jpg" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Portal:&lt;/STRONG&gt; Go to the channels page of the Azure Bot, click on Alexa, and paste the Alexa Skill ID that we copied from the Alexa Developer Console.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Alexa config settings in Azure bot service.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249355i9D0AAD3055C69612/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Alexa config settings in Azure bot service.jpg" alt="Alexa config settings in Azure bot service.jpg" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Now, we’ll download the bot source code and test it locally using the &lt;A href="https://dev.botframework.com/" target="_blank" rel="noopener"&gt;Bot Framework Emulator&lt;/A&gt;. Click on “Build” in the Azure Web App Bot to download the source code locally. Modify app.py as below:&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

from http import HTTPStatus

from aiohttp import web
from aiohttp.web import Request, Response, json_response
from botbuilder.core import (
    BotFrameworkAdapterSettings,
    ConversationState,
    MemoryStorage,
    UserState,
)
from botbuilder.core.integration import aiohttp_error_middleware
from botbuilder.schema import Activity

from config import DefaultConfig
from dialogs import MainDialog, groceryDialog
from bots import DialogAndWelcomeBot

from adapter_with_error_handler import AdapterWithErrorHandler
# LUIS recognizer class for this bot (module name assumed; adjust to match your project).
from intelligent_grocery_recognizer import IntelligentGrocery

CONFIG = DefaultConfig()

# Create adapter settings.
# See https://aka.ms/about-bot-adapter to learn more about how bots work.
SETTINGS = BotFrameworkAdapterSettings(CONFIG.APP_ID, CONFIG.APP_PASSWORD)

# Create MemoryStorage, UserState and ConversationState
MEMORY = MemoryStorage()
USER_STATE = UserState(MEMORY)
CONVERSATION_STATE = ConversationState(MEMORY)

# Create adapter with error handling.
ADAPTER = AdapterWithErrorHandler(SETTINGS, CONVERSATION_STATE)

# Create dialogs and Bot
RECOGNIZER = IntelligentGrocery(CONFIG)
grocery_DIALOG = groceryDialog()
DIALOG = MainDialog(RECOGNIZER, grocery_DIALOG)
BOT = DialogAndWelcomeBot(CONVERSATION_STATE, USER_STATE, DIALOG)

# Listen for incoming requests on /api/messages.
async def messages(req: Request) -&amp;gt; Response:
    # Main bot message handler.
    if "application/json" in req.headers["Content-Type"]:
        body = await req.json()
    else:
        return Response(status=HTTPStatus.UNSUPPORTED_MEDIA_TYPE)

    activity = Activity().deserialize(body)
    auth_header = req.headers["Authorization"] if "Authorization" in req.headers else ""

    response = await ADAPTER.process_activity(activity, auth_header, BOT.on_turn)
    if response:
        return json_response(data=response.body, status=response.status)
    return Response(status=HTTPStatus.OK)

APP = web.Application(middlewares=[aiohttp_error_middleware])
APP.router.add_post("/api/messages", messages)

if __name__ == "__main__":
    try:
        web.run_app(APP, host="localhost", port=CONFIG.PORT)
    except Exception as error:
        raise error
&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;Next, we’ll run and test the bot with Bot Framework Emulator. From the terminal, navigate to the code folder and run pip install -r requirements.txt to install the required packages to run the bot. Once the packages are installed, run python app.py to start the bot. The bot is ready to test as shown below:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BF Emulator test.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249358i96872251C2ACD3CA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="BF Emulator test.jpg" alt="BF Emulator test.jpg" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN style="font-family: inherit;"&gt;In the Bot Framework Emulator, open the bot by entering the localhost URL with the port number shown in the terminal, followed by /api/messages.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BF Emulator screenshot.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249359i544E91ECBB7FE80D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="BF Emulator screenshot.png" alt="Bot Framework Emulator view" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Bot Framework Emulator view&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;Now we’re ready to add natural language understanding so the bot can understand user intent. Here, we’ll use Azure’s Language Understanding Cognitive Service (LUIS), to map user input to an “&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-intent" target="_blank" rel="noopener"&gt;intent&lt;/A&gt;” and extract “&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-entity-types" target="_blank" rel="noopener"&gt;entities&lt;/A&gt;” from the sentence. In the below illustration, the sentence “add milk and eggs to the list” is sent as a text string to the LUIS endpoint. LUIS returns the JSON seen on the right.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS diagram.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249360i41B0A2780827D409/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS diagram.png" alt="Language Understanding utterances diagram" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding utterances diagram&lt;/span&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;Use the below template to create a LUIS JSON model file where we specify intents and entities manually. After the “IntelligentGrocery” app is created in the &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;LUIS portal&lt;/A&gt; under “Import New App”, upload the JSON file with the below intents and entities.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
      "text": "access the groceries list",
      "intent": "Show",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 11,
          "endPos": 19,
          "children": []
        }
      ]
    },
    {
      "text": "add bread to the grocery list",
      "intent": "Add",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 23,
          "endPos": 29,
          "children": []
        }
      ]
    }
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The above sample intents are for adding items and accessing the items in the grocery list. Now, it’s your turn to add additional intents to perform the below tasks, using the &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;LUIS portal&lt;/A&gt;. Learn more about how to create the intents &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/get-started-portal-build-app" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Intents&lt;/STRONG&gt;&lt;/P&gt;
&lt;TABLE width="624"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;&lt;STRONG&gt;Name &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;CheckOff&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Mark the grocery items as purchased.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;Confirm&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Confirm the previous action.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="110"&gt;
&lt;P&gt;Delete&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="514"&gt;
&lt;P&gt;Delete items from the grocery list.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the intents and entities are added, we will need to train and publish the model so the LUIS app can recognize utterances pertaining to these grocery list actions.&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Portal.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249361i33AF87C4EED5EBA9/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Portal.png" alt="Language Understanding (LUIS) Portal" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding (LUIS) Portal&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="8"&gt;
&lt;LI&gt;After the model has been published in the LUIS portal, click ‘Access your endpoint Urls’ and copy the primary key, example query and endpoint URL for the prediction resource.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Build endpoint.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249363i329D39E3A396449C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Build endpoint.png" alt="Language Understanding endpoint" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding endpoint&lt;/span&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS prediction resource.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249366i3EFD563A9B93CDBF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS prediction resource.png" alt="Language Understanding (LUIS) Prediction view" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language Understanding (LUIS) Prediction view&lt;/span&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Navigate to the Settings page in the LUIS portal to retrieve the App ID.&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LUIS Settings APP ID.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249367iCEEE2DCF7CCB339A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="LUIS Settings APP ID.png" alt="Application settings" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Application settings&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL start="9"&gt;
&lt;LI&gt;Finally, test your Language Understanding model. The endpoint URL will be in the format below, with your own custom subdomain, and with your app ID and endpoint key replacing APP-ID and KEY-ID. Go to the end of the URL and enter a test utterance; for example, “get me all the items from the grocery list”. The JSON result identifies the top scoring intent and each prediction’s confidence score. This is a good test to see whether LUIS predicts the intent you expect (a small code sketch for calling this endpoint follows the URL below).&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE width="625"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="625"&gt;
&lt;P&gt;&lt;A href="https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/APP-ID/slots/production/predict?subscription-key=KEY-ID&amp;amp;verbose=true&amp;amp;show-all-intents=true&amp;amp;log=true&amp;amp;query=YOUR_QUERY_HERE" target="_blank" rel="noopener"&gt;https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/APP-ID/slots/production/predict?subscription-key=KEY-ID&amp;amp;verbose=true&amp;amp;show-all-intents=true&amp;amp;log=true&amp;amp;query=YOUR_QUERY_HERE&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
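&lt;P&gt;If you would rather call the prediction endpoint from a script than from the browser, the snippet below is a minimal sketch using Python and the requests library. The subdomain, APP-ID, and KEY-ID values are placeholders to replace with your own, and the fields read from the response follow the LUIS v3 prediction JSON.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query the published LUIS prediction endpoint (all values are placeholders)
import requests

endpoint = "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com"
app_id = "APP-ID"
prediction_key = "KEY-ID"
query = "get me all the items from the grocery list"

url = endpoint + "/luis/prediction/v3.0/apps/" + app_id + "/slots/production/predict"
params = {
    "subscription-key": prediction_key,
    "verbose": "true",
    "show-all-intents": "true",
    "log": "true",
    "query": query,
}

resp = requests.get(url, params=params)
prediction = resp.json()["prediction"]

# The top scoring intent and its confidence score
print("Top intent:", prediction["topIntent"])
print("Score:", prediction["intents"][prediction["topIntent"]]["score"])
&lt;/LI-CODE&gt;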
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Additional Ideas&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;We’ve now seen how to build a voice bot leveraging Azure services to automate a common task. We hope it gives you a good starting point towards building bots for other scenarios as well. Try out some of the ideas below to continue building upon your bot and exploring additional Azure AI services.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add Google Home assistant as an additional channel to receive voice commands.&lt;/LI&gt;
&lt;LI&gt;Add a PictureBot extension to your bot and add pictures of your grocery items. You will need to create intents that trigger actions that the bot can take, and create entities that require these actions. For example, an intent for the PictureBot may be “SearchPics”. This could trigger Azure Cognitive Search to look for photos, using a “facet” entity to know what to search for. See what other functionality you can come up with!&lt;/LI&gt;
&lt;LI&gt;Use &lt;A href="https://www.qnamaker.ai/" target="_blank" rel="noopener"&gt;Azure QnA Maker&lt;/A&gt; to enable your bot to answer FAQs from a knowledge base. Add a bit of personality using the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/chit-chat-knowledge-base?tabs=v1" target="_blank" rel="noopener"&gt;chit-chat&lt;/A&gt; feature.&lt;/LI&gt;
&lt;LI&gt;Integrate &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/" target="_blank" rel="noopener"&gt;Azure Personalizer&lt;/A&gt; with your voice chatbot to enable the bot to recommend a list of products to the user, providing a personalized experience.&lt;/LI&gt;
&lt;LI&gt;Include &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview" target="_blank" rel="noopener"&gt;Azure Speech service&lt;/A&gt; to give your bot a custom, high quality voice, with 200+ Text to Speech options across 54 different locales/languages, as well as customizable Speech to Text capabilities to process voice inputs.&lt;/LI&gt;
&lt;LI&gt;Try building this bot using &lt;A style="font-family: inherit; background-color: #ffffff;" href="https://docs.microsoft.com/en-us/composer/introduction" target="_blank" rel="noopener"&gt;Bot Framework Composer&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt;, a visual authoring canvas.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 00:30:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-voice-enabled-grocery-chatbot-with-azure-ai/ba-p/2096079</guid>
      <dc:creator>wmendoza</dc:creator>
      <dc:date>2021-01-26T00:30:09Z</dc:date>
    </item>
    <item>
      <title>How to build an intelligent travel journal using Azure AI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-an-intelligent-travel-journal-using-azure-ai/ba-p/2095168</link>
      <description>&lt;P&gt;AI capabilities can enhance many types of applications, enabling you to improve your customer experience and solve complex problems. With Azure Cognitive Services, you can easily access and customize industry-leading AI models, using the tools and languages of your choice.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll walk through an exercise which you can complete in under an hour, to get started using Azure AI Services. Many of us are dreaming of traveling again, and building this intelligent travel journal app can help you capture memories from your next trip, whenever that may be. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application&lt;/U&gt;&lt;/STRONG&gt;&lt;U&gt;:&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-align-left"&gt;Capture voice memos, voice tag photos, and transcribe speech to text.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Automatically tag your photos based on key phrase extraction and analysis of text in pictures.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Translate tags and text into desired language.&lt;/LI&gt;
&lt;LI class="lia-align-left"&gt;Organize your memos by key phrase and find similar travel experiences you enjoyed with AI-powered search.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="travel blog app image.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249197i5D7A090CB0DC4851/image-size/medium?v=v2&amp;amp;px=400" role="button" title="travel blog app image.jpg" alt="travel blog app image.jpg" /&gt;&lt;/span&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/?OCID=AID3024570" target="_self"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://ms.portal.azure.com/?OCID=AID3024570" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;To run the provided &lt;A href="https://github.com/Azure-Samples/AIDeveloperResources" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;, you will need &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Faka.ms%2Fvsdownload&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638687346%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=cKXR10KgmYmjZ8k5vFnzlNUcZGMl38oqoXHwsILIKj4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Visual Studio 2019&lt;/A&gt; and &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdotnet.microsoft.com%2Flearn%2Fdotnet%2Fhello-world-tutorial%2Fintro&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638697298%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=IPinEHwdLDuphf1OMth%2BmbGGiD0Sgy5qk95jAzHLTTA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;.NET Core 3.1&lt;/A&gt; or above (for FotoFly)&lt;/LI&gt;
&lt;LI&gt;Refer to this &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fdotnet%2Fcore%2Ftutorials%2Fpublishing-with-visual-studio&amp;amp;data=04%7C01%7CMadison.Butzbach%40microsoft.com%7Cf2e19207835247d0176308d8bde3a4d5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637468134638697298%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=%2BG5LCd6x4TCjwpvzkzqn1szVyALEW94EaxVd1eFeqww%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;tutorial&lt;/A&gt; for detailed guidance on how to publish a console app.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Key Azure technologies:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Speech Service &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription" target="_blank" rel="noopener"&gt;batch transcription&lt;/A&gt; for speech to text transcription&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/" target="_blank" rel="noopener"&gt;Text Analytics&lt;/A&gt; for key phrase/intent extraction&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/" target="_blank" rel="noopener"&gt;Computer Vision&lt;/A&gt; for analyzing text in images&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/translator/reference/v3-0-translate" target="_blank" rel="noopener"&gt;Translator&lt;/A&gt; to normalize tags/text into desired language.&lt;/LI&gt;
&lt;LI&gt;Open Source &lt;A href="http://www.java2s.com/Open-Source/CSharp_Free_Code/Windows_Presentation_Foundation_Library/Download_Fotofly_Photo_Metadata_Library.htm" target="_blank" rel="noopener"&gt;FotoFly&lt;/A&gt; library for photo tagging. Alternatively, you can use blob metadata but functionality will be limited.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/search/" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; for AI-powered search.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;NOTE:&amp;nbsp; &lt;EM&gt;For more information, refer to the “&lt;U&gt;References.txt&lt;/U&gt;” file under the respective folders within the JournalHelper library project in the sample solution provided with this blog.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Solution Architecture&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="travel blog architecture image.png" style="width: 699px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249198i15CF06412667F92D/image-size/large?v=v2&amp;amp;px=999" role="button" title="travel blog architecture image.png" alt="travel blog architecture image.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;User records a voice memo; for example, to accompany an image they’ve captured. The recorded file is stored in a file repository (alternatively, you could use a &lt;A href="https://azure.microsoft.com/solutions/databases" target="_blank" rel="noopener"&gt;database&lt;/A&gt;).&lt;/LI&gt;
&lt;LI&gt;The recorded voice memo (e.g. .m4a) is converted into the format required by Azure’s Speech Service batch transcription (e.g. .wav).&lt;/LI&gt;
&lt;LI&gt;The folder containing voice memos is uploaded to a Blob container.&lt;/LI&gt;
&lt;LI&gt;Images are uploaded into a separate container for analysis of any text within the photos, using Azure Computer Vision.&lt;/LI&gt;
&lt;LI&gt;Use Translator to translate text to different languages, as needed. This may be useful to translate foreign street signs, menus, or other text in images.&lt;/LI&gt;
&lt;LI&gt;Extract tags from the generated text files using Text Analytics, and send tags back to the corresponding image file. Tags can be travel related (#milan, #sunset, #Glacier National Park), or based on geotagging metadata, photo metadata (camera make, exposure, ISO), and more.&lt;/LI&gt;
&lt;LI&gt;Create a search indexer with Azure Cognitive Search, and use the generated index to search your intelligent travel journal.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Implementation&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Sample code&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The entire solution code is available for download at this &lt;A href="https://github.com/Azure-Samples/AIDeveloperResources" target="_blank" rel="noopener"&gt;link&lt;/A&gt;. Download or clone it and follow the instructions in the ReadMe.md solution item for further setup.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Implementation summary&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The sample is implemented using various client libraries and samples available for Azure Cognitive Services. All of these service calls are grouped together into a helper library project named “journalhelper”. The library introduces a helper class for scenarios that combine several Cognitive Services to achieve the desired functionality.&lt;/P&gt;
&lt;P&gt;We use a .NET Core console app as the front end to test the scenarios. The sample also uses another open source library (FotoFly), which is ported to .NET Core here, to access and edit image metadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps, along with sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Start by batch transcribing voice memos and extracting key tags from the text output. Group the input voice memos into a folder, upload them into an Azure Blob container or specify a list of their URLs, and use batch transcription to get results back into the Azure Blob container, as well as a folder in your file system. The following code snippet illustrates how helper functions can be grouped together for specific functionality. It combines the local file system, Azure Storage containers, and the Cognitive Services speech batch transcription API.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;Console.WriteLine("Uploading voice memos folder to blob container...");
Helper.UploadFolderToContainer(
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.VoiceMemosFolder),
customSettings.AzureBlobContainers.InputVoiceMemoFiles, deleteExistingContainer);
Console.WriteLine("Branch Transcribing voice memos using containers...");
//NOTE: Turn the pricing tier for Speech Service to standard for this below to work.

await Helper.BatchTranscribeVoiceMemosAsync(
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
          customSettings.SpeechConfigSettings.Key,
          customSettings.SpeechConfigSettings.Region);

Console.WriteLine("Extract transcribed text files into another container and folder, delete the intermediate container with json files...");

await Helper.ExtractTranscribedTextfromJsonAsync(
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.ExtractedTranscribedTexts,
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.BatchTranscribedFolder), true);
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Next, create tags from the transcribed text. Sample helper function using the Text Analytics client library is listed below.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;//text analytics
public static void CreateTagsForFolderItems(string key, string endpoint, string batchTranscribedFolder, string extractedTagsFolder)
{
    if (!Directory.Exists(batchTranscribedFolder))
    {
       Console.WriteLine("Input folder for transcribed files does not exist");
       return;
    }

    // ensure destination folder path exists
    Directory.CreateDirectory(extractedTagsFolder);
    TextAnalyticsClient textClient = TextAnalytics.GetClient(key, endpoint);

    var contentFiles = Directory.EnumerateFiles(batchTranscribedFolder);
    foreach(var contentFile in contentFiles
    {
var tags = TextAnalytics.GetTags(textClient, 
contentFile).ConfigureAwait(false).GetAwaiter().GetResult();

// generate output file with tags 
string outFileName = Path.GetFileNameWithoutExtension(contentFile);
                outFileName += @"_tags.txt";
string outFilePath = Path.Combine(extractedTagsFolder, outFileName);
File.WriteAllLinesAsync(outFilePath, tags).Wait() ;
    }
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The actual client library or service calls are made as shown:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;static public async Task&amp;lt;IEnumerable&amp;lt;string&amp;gt;&amp;gt; GetTags(TextAnalyticsClient 
client, string inputTextFilePath)
{
   string inputContent = await File.ReadAllTextAsync(inputTextFilePath);
   var entities = EntityRecognition(client, inputContent);
   var phrases = KeyPhraseExtraction(client, inputContent);
   var tags = new List&amp;lt;string&amp;gt;();
   tags.AddRange(entities);
   tags.AddRange(phrases);
   return tags;
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Update tags to the photo/image file, using the open source FotoFly library.&amp;nbsp; Alternatively, you can update the Blob metadata with these tags and include that in the search index, but the functionality will be limited to using Azure Blob storage.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;string taggedPhotoFile = photoFile.Replace(inputPhotosFolder,    
      OutPhotosFolder);
File.Copy(photoFile, taggedPhotoFile, true);

if (tags.Count &amp;gt; 0)
{
    ImageProperties.SetPhotoTags(taggedPhotoFile, tags);
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Other useful functions to complete the scenario are:
&lt;OL&gt;
&lt;LI&gt;Helper.ProcessImageAsync, and&lt;/LI&gt;
&lt;LI&gt;Helper.TranslateFileContent&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The first can be used to extract text from images using OCR or regular text processing with Computer Vision. The second can detect the source language, translate the text into the desired output language using Azure’s Translator service, and then create more tags for the image file. A minimal sketch of the underlying translation call is shown below.&lt;/P&gt;
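&lt;P&gt;For illustration, here is a minimal, language-agnostic sketch (shown in Python) of the Translator v3 REST call that a helper such as Helper.TranslateFileContent would make for each piece of extracted text. The key, region, and sample sentence are placeholders rather than values from the sample solution.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch of a Translator v3 call (key, region, and sample text are placeholders)
import requests

key = "YOUR-TRANSLATOR-KEY"
region = "YOUR-RESOURCE-REGION"
endpoint = "https://api.cognitive.microsofttranslator.com/translate"

def translate_text(text, to_language="en"):
    params = {"api-version": "3.0", "to": to_language}
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    body = [{"Text": text}]
    result = requests.post(endpoint, params=params, headers=headers, json=body).json()

    # When no source language is specified, the service detects it
    detected = result[0]["detectedLanguage"]["language"]
    translated = result[0]["translations"][0]["text"]
    return detected, translated

print(translate_text("Ausfahrt 12 Richtung Flughafen"))
&lt;/LI-CODE&gt;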
&lt;OL start="5"&gt;
&lt;LI&gt;Finally, use Azure Cognitive Search to create an index from the extracted text files saved in the Blob container, enabling you to search for documents and create journal text files. For example, you can search for images by cities or countries visited, date, or even cuisines. You can also search for images by camera-related metadata or geolocation.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In this sample we have demonstrated simple built-in skillsets for entity and language detection. The solution can be further enhanced by adding additional data sources to process tagged images and their metadata, and adding additional information to the searches.&lt;/P&gt;
&lt;P&gt;NOTE:&amp;nbsp; &lt;EM&gt;The helper functions can be made more generic to take additional skillset input.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;public static async Task CreateSearchIndexerAsync(
    string serviceAdminKey, string searchSvcUrl,
    string cognitiveServiceKey,
    string indexName, string jsonFieldsFilePath,
    string blobConnectionString, string blobContainerName
    )
{
    // Its a temporary arrangment.  This function is not complete
    IEnumerable&amp;lt;SearchField&amp;gt; fields = SearchHelper.LoadFieldsFromJSonFile(jsonFieldsFilePath);

    // create index
    var searchIndex = await 
Search.Search.CreateSearchIndexAsync(serviceAdminKey, 
searchSvcUrl, indexName, fields.ToList());

    // get indexer client
    var indexerClient = 
Search.Search.GetSearchIndexerClient(serviceAdminKey, searchSvcUrl);

    // create azure blob data source
    var dataSource = await 
Search.Search.CreateOrUpdateAzureBlobDataSourceAsync(indexerClient, 
blobConnectionString, indexName, blobContainerName);

    // create indexer

    // create skill set with minimal skills
    List&amp;lt;SearchIndexerSkill&amp;gt; skills = new List&amp;lt;SearchIndexerSkill&amp;gt;();
            skills.Add(Skills.CreateEntityRecognitionSkill());
            skills.Add(Skills.CreateLanguageDetectionSkill());
     var skillSet = await 
Search.Search.CreateOrUpdateSkillSetAsync(indexerClient,
             indexName + "-skillset", skills, cognitiveServiceKey);

     var indexer = await Search.Search.CreateIndexerAsync(indexerClient, 
dataSource, skillSet, searchIndex);

     // wait for some time to have indexer run and load documents
     Thread.Sleep(TimeSpan.FromSeconds(20));

     await Search.Search.CheckIndexerOverallStatusAsync(indexerClient, 
             indexer);
}
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, search documents and generate the corresponding journal files, utilizing the following functions:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Helper.SearchDocuments&lt;/LI&gt;
&lt;LI&gt;Helper.CreateTravelJournal&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Additional Ideas&lt;/H2&gt;
&lt;P&gt;In addition to the functionality described so far, there are many other ways you can leverage Azure AI to further enhance your intelligent travel journal and learn more advanced scenarios. We encourage you to explore some of the following ideas to enrich your app:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add real time voice transcription and store transcriptions in an &lt;A href="https://azure.microsoft.com/solutions/databases/" target="_blank" rel="noopener"&gt;Azure managed database&lt;/A&gt;, to correlate voice transcription with images in context.&lt;/LI&gt;
&lt;LI&gt;Include travel tickets and receipts as images for OCR-based image analysis (&lt;A href="https://azure.microsoft.com/en-gb/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;) and include them as journal artifacts.&lt;/LI&gt;
&lt;LI&gt;Use multiple data sources for a given search index. We have simplified and only included text files to index in this sample, but you can include the tagged photos from a different data source for the same search index.&lt;/LI&gt;
&lt;LI&gt;Add custom skills and data extraction for &lt;A href="https://docs.microsoft.com/en-us/azure/search/search-indexer-overview" target="_blank" rel="noopener"&gt;search indexer&lt;/A&gt;. Extract metadata from images and include as search content.&lt;/LI&gt;
&lt;LI&gt;Extract metadata from video and audio content using &lt;A href="https://azure.microsoft.com/services/media-services/video-indexer/" target="_blank" rel="noopener"&gt;Video Indexer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Experiment with &lt;A href="https://www.luis.ai/" target="_blank" rel="noopener"&gt;Language Understanding&lt;/A&gt; and generate more elaborate and relevant search content based on top scoring intents and entities. Sample keywords and questions related to current sample data are included in Objectives.docx solution item.&lt;/LI&gt;
&lt;LI&gt;Build a consumer front-end app that stitches all of this together and displays the journal in a UI.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 00:26:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-an-intelligent-travel-journal-using-azure-ai/ba-p/2095168</guid>
      <dc:creator>maddybutzbach</dc:creator>
      <dc:date>2021-01-26T00:26:41Z</dc:date>
    </item>
    <item>
      <title>How to build a personal finance app using Azure</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/ba-p/2088995</link>
      <description>&lt;P&gt;AI allows you to deliver breakthrough experiences in your apps. With Azure Cognitive Services, you can easily customize and deploy the same AI models that power Microsoft’s products, such as Xbox and Bing, using the tools and languages of your choice.&lt;/P&gt;
&lt;P&gt;In this blog we will walk through an exercise that you can complete in under an hour and learn how to build an application that can be useful for you, all while exploring a set of Azure services. If you have ever wanted to get your financial transactions in order, look no further. With this exercise, we’ll explore how to quickly take a snap of a receipt from your phone and upload it for categorization, creating expense reports, and to gain insights to your spending. Remember, even though we’ll walk you through each step, you can always explore the sample code and get creative with your own unique solution!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Features of the application:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Snap a picture of your receipt and upload it using your smartphone&lt;/LI&gt;
&lt;LI&gt;Extract relevant data from the images: Who issued the receipt? What was the total amount? What was purchased? All of this information can be effortlessly stored for exploration&lt;/LI&gt;
&lt;LI&gt;Query the data: bring your receipts to life by extracting relevant and insightful information&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Prerequisites&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If you don't have an Azure subscription, create a &lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;free account&lt;/A&gt; before you begin. If you have a subscription, log in to the &lt;A href="https://azure.microsoft.com/en-us/features/azure-portal/" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;You will need to have &lt;A href="https://www.python.org/downloads/" target="_blank" rel="noopener"&gt;python&lt;/A&gt; installed locally to run some of the samples.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;Key&amp;nbsp;Azure technologies:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Azure Form Recognizer&lt;/A&gt; scans image documents with optical character recognition and extracts text, key/value pairs, and tables from documents, receipts, and forms.&lt;/LI&gt;
&lt;LI&gt;Form Recognizer’s &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts?tabs=v2-0" target="_blank" rel="noopener"&gt;prebuilt receipt model&lt;/A&gt; specifically extracts receipt data&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/storage/" target="_blank" rel="noopener"&gt;Azure Blob Storage&lt;/A&gt; is used to store data&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_blank" rel="noopener"&gt;Azure Cognitive Search&lt;/A&gt; enriches the data by making it easily identifiable&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Solution Architecture&lt;/H2&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ReceiptUploaderSolutionArchitecture.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248818iEB5D2E8135093DAA/image-size/large?v=v2&amp;amp;px=999" role="button" title="ReceiptUploaderSolutionArchitecture.png" alt="ReceiptUploaderSolutionArchitecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;U&gt;App Architecture Description:&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;User uploads a receipt image from their mobile device&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;The uploaded image is verified and then sent to the Azure Form Recognizer to extract information&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The image is analysed by the REST API within the Form Recognizer prebuilt receipt model&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;A JSON is returned that has both the text information and bounding box coordinates of the extracted receipt data&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;The resulting JSON is parsed and a simpler JSON is formed, saving only the relevant information needed&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;This receipt JSON is then stored in Azure Blob Storage &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Azure Cognitive Search points directly to Azure Blob Storage and is used to index the data &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The application queries this search index to extract relevant information from the receipts&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Another visual of the flow of data within the solution architecture is shown below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="FlowChart.png" style="width: 602px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249275i243E605E6E4CDDD1/image-size/large?v=v2&amp;amp;px=999" role="button" title="FlowChart.png" alt="FlowChart.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we’ve explored the technology and services we’ll be using, let’s dive into building our app!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Implementation&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;To get started, data from receipts must be extracted; this is done by setting up the Form Recognizer service in Azure and connecting to the service to use the relevant API for receipts. A JSON is returned that contains the information extracted from receipts and is stored in Azure Blob Storage to be used by Azure Cognitive Search. Cognitive Search is then utilized to index the receipt data, and to search for relevant information.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;High level overview of steps, along with sample code snippets for illustration:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to the Azure portal and&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" target="_blank" rel="noopener"&gt;create a new Form Recognizer resource&lt;/A&gt;&lt;/SPAN&gt;. In the&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;pane, provide the following information:&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;A descriptive name for your resource.&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Subscription&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;Select the Azure subscription which has been granted access.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Location&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Pricing Tier&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The cost of your resource depends on the pricing tier you choose and your usage. For more information, see the API&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://azure.microsoft.com/pricing/details/cognitive-services/" target="_blank" rel="noopener"&gt;pricing details&lt;/A&gt;&lt;/SPAN&gt;.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="123"&gt;
&lt;P&gt;&lt;STRONG&gt;Resource Group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="479"&gt;
&lt;P&gt;The&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group" target="_blank" rel="noopener"&gt;Azure resource group&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;that will contain your resource. You can create a new group or add it to a pre-existing group.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;After Form Recognizer deploys, go to All Resources and locate the newly deployed resource. Save the key and endpoint from the resource’s key and endpoint page so you can access them later.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;You can use the following &lt;SPAN&gt;&lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeReceiptAsync" target="_blank" rel="noopener"&gt;Analyze Receipt API&lt;/A&gt;&lt;/SPAN&gt; to start analyzing the receipt. Remember to replace &amp;lt;endpoint&amp;gt; and &amp;lt;subscription key&amp;gt; with the values you saved earlier, and replace &amp;lt;path to your receipt&amp;gt; with the local path to your scanned receipt image.&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Analyse script

import json
import time
from requests import get, post

# Endpoint URL
endpoint = r"&amp;lt;endpoint url&amp;gt;"
apim_key = "&amp;lt;subscription key&amp;gt;"
post_url = endpoint + "/formrecognizer/v2.0/prebuilt/receipt/analyze"
source = r"&amp;lt;path to your receipt&amp;gt;"

headers = {
    # Request headers
    'Content-Type': 'image/jpeg',
    'Ocp-Apim-Subscription-Key': apim_key,
}

params = {
    "includeTextDetails": True
}

with open(source, "rb") as f:
    data_bytes = f.read()

try:
    resp = post(url=post_url, data=data_bytes, headers=headers, params=params)
    if resp.status_code != 202:
        print("POST analyze failed:\n%s" % resp.text)
        quit()
    print("POST analyze succeeded:\n%s" % resp.headers)
    get_url = resp.headers["operation-location"]
except Exception as e:
    print("POST analyze failed:\n%s" % str(e))
    quit()
​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;If you run this code and everything is as it should be, you'll receive a&amp;nbsp;&lt;STRONG&gt;202 (Success)&lt;/STRONG&gt;&amp;nbsp;response that includes an&amp;nbsp;&lt;STRONG&gt;Operation-Location&lt;/STRONG&gt;&amp;nbsp;header, which the script will print to the console. This header contains an &lt;STRONG&gt;operation id&lt;/STRONG&gt; that you can use to query the status of the asynchronous operation and get the results. In the following example value, the string after&amp;nbsp;operations/&amp;nbsp;is the operation ID.&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="601"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://cognitiveservice/formrecognizer/v2.0/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb" target="_blank" rel="noopener"&gt;https://cognitiveservice/formrecognizer/v2.0/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Now you can call the&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/GetAnalyzeReceiptResult" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Get Analyze Receipt Result&lt;/STRONG&gt;&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;API to get the Extracted Data.&lt;BR /&gt;&lt;LI-CODE lang="python"&gt;# Get results.
n_tries = 10
n_try = 0
wait_sec = 6
while n_try &amp;lt; n_tries:
    try:
        resp = get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
        resp_json = json.loads(resp.text)
        if resp.status_code != 200:
            print("GET Receipt results failed:\n%s" % resp_json)
            quit()
        status = resp_json["status"]
        if status == "succeeded":
            print("Receipt Analysis succeeded:\n%s" % resp_json)
            quit()
        if status == "failed":
            print("Analysis failed:\n%s" % resp_json)
            quit()
        # Analysis still running. Wait and retry.
        time.sleep(wait_sec)
        n_try += 1
    except Exception as e:
        msg = "GET analyze results failed:\n%s" % str(e)
        print(msg)
        quit()
​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This code uses the operation id and makes another API call. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="6"&gt;
&lt;LI&gt;The returned JSON can be examined to get the required information: the ‘readResults’ field contains all lines of text that were recognized, and the ‘documentResults’ field contains key/value information for the most relevant parts of the receipt (e.g. the merchant, total, line items, etc.). A minimal sketch of parsing these fields is shown after this list.&lt;SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;The receipt image below, &lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt; &lt;SPAN&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="contosoReceipt.jpg" style="width: 496px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249243iEE3167D5881B66D2/image-size/large?v=v2&amp;amp;px=999" role="button" title="contosoReceipt.jpg" alt="contosoReceipt.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
resulted in the JSON from which we have extracted the following details: &lt;SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;LI-CODE lang="json"&gt; MerchantName: THE MAD HUNTER 
 TransactionDate: 2020-08-23 
 TransactionTime: 22:07:00 
 Total: £107.10 &lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
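&lt;P&gt;As a rough illustration of this step, the snippet below sketches how these fields could be pulled out of the result JSON in Python. It assumes the resp_json variable from the polling script above and the v2.0 prebuilt receipt field names, so treat it as a starting point rather than a complete parser.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: pull the interesting pieces out of the Get Analyze Receipt Result JSON.
# 'resp_json' is the parsed response from the polling script above; field names follow
# the v2.0 prebuilt receipt model, so this is a starting point rather than a full parser.
def extract_receipt_fields(resp_json):
    analyze_result = resp_json["analyzeResult"]

    # 'readResults' holds every recognized line of text, page by page
    all_lines = [line["text"]
                 for page in analyze_result.get("readResults", [])
                 for line in page.get("lines", [])]

    # 'documentResults' holds key/value fields for the most relevant parts of the receipt
    fields = analyze_result["documentResults"][0]["fields"]
    receipt = {
        "MerchantName": fields.get("MerchantName", {}).get("text"),
        "TransactionDate": fields.get("TransactionDate", {}).get("text"),
        "TransactionTime": fields.get("TransactionTime", {}).get("text"),
        "Total": fields.get("Total", {}).get("text"),
    }
    return receipt, all_lines
&lt;/LI-CODE&gt;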
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;We will now create a JSON from all the data extracted from the analysed receipt. The structure of the JSON is shown below:&lt;BR /&gt;&lt;LI-CODE lang="json"&gt;{
   "id":"INV001",
   "user":"Sujith Kumar",
   "createdDateTime":"2020-10-23T17:16:32Z",
   "MerchantName":"THE MAD HUNTER",
   "TransactionDate":"2020-10-23",
   "TransactionTime":"22:07:00",
   "currency":"GBP",
   "Category":"Entertainment",
   "Total":"107.10",
   "Items":[	]
}​&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;We can now save this JSON and build a search service to extract the information we want from it.&lt;/P&gt;
&lt;P&gt;Before continuing to step 8, you must have an Azure Storage Account with Blob storage.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
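&lt;P&gt;If you prefer to write the JSON files to Blob Storage from code rather than uploading them manually, the snippet below is a minimal sketch using the azure-storage-blob client library. The connection string, container name, and trimmed-down receipt object are placeholders for illustration only.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: save a receipt JSON to the Blob container that will feed the search index
# (connection string, container name, and the trimmed-down receipt are placeholders)
import json
from azure.storage.blob import BlobServiceClient

connection_string = "YOUR-STORAGE-CONNECTION-STRING"
container_name = "receipts"

service = BlobServiceClient.from_connection_string(connection_string)
container = service.get_container_client(container_name)

receipt = {
    "id": "INV001",
    "MerchantName": "THE MAD HUNTER",
    "Category": "Entertainment",
    "Total": "107.10",
}

# Each receipt becomes one JSON blob that the indexer can pick up
container.upload_blob(name=receipt["id"] + ".json",
                      data=json.dumps(receipt),
                      overwrite=True)
&lt;/LI-CODE&gt;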
&lt;OL start="8"&gt;
&lt;LI&gt;We will now save the JSON files in an &lt;STRONG&gt;Azure Blob Storage&lt;/STRONG&gt; container and use it as a source for the &lt;STRONG&gt;Azure Cognitive Search Service Index&lt;/STRONG&gt; that we will create. &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Sign in to the Azure Portal and search for "Azure Cognitive Search" or navigate to the resource through&amp;nbsp;&lt;STRONG&gt;Web&lt;/STRONG&gt;&amp;nbsp;&amp;gt;&amp;nbsp;&lt;STRONG&gt;Azure Cognitive Search&lt;/STRONG&gt;. Follow the steps to:&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;Choose a subscription&lt;/LI&gt;
&lt;LI&gt;Set a resource group&lt;/LI&gt;
&lt;LI&gt;Name the service appropriately&lt;/LI&gt;
&lt;LI&gt;Choose a location&lt;/LI&gt;
&lt;LI&gt;Choose a pricing tier for this service&lt;/LI&gt;
&lt;LI&gt;Create your service&lt;/LI&gt;
&lt;LI&gt;Get a key and URL endpoint &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We will use the free Azure service, which means you can create three indexes, three data sources and three indexers. The dashboard will show you how many of each you have left. For this exercise you will create one of each.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="10"&gt;
&lt;LI&gt;In the portal, find the search service you created above and click &lt;STRONG&gt;Import data&lt;/STRONG&gt; on the command bar to start the wizard. In the wizard, click on Connect to your data and specify the name, type, and connection information. Skip the ‘Enrich Content’ page and go to &lt;STRONG&gt;Customize Target Index.&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;For this exercise, we will use the wizard to generate a basic index for our receipt data. Minimally, an index requires a name and a fields collection; one of the fields should be marked as the document key to uniquely identify each document.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Fields have data types and attributes. The check boxes across the top are&amp;nbsp;&lt;EM&gt;index attributes&lt;/EM&gt;&amp;nbsp;controlling how the field is used.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Retrievable&lt;/STRONG&gt;&amp;nbsp;means that the field shows up in the search results list. You can mark individual fields as off limits for search results by clearing this checkbox.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Key&lt;/STRONG&gt;&amp;nbsp;is the unique document identifier. It's always a string, and it is required.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Filterable&lt;/STRONG&gt;,&amp;nbsp;&lt;STRONG&gt;Sortable&lt;/STRONG&gt;, and&amp;nbsp;&lt;STRONG&gt;Facetable&lt;/STRONG&gt;&amp;nbsp;determine whether fields are used in a filter, sort, or faceted navigation structure.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Searchable&lt;/STRONG&gt;&amp;nbsp;means that a field is included in full text search. Only Strings are searchable.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Make sure you choose the following fields:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;id&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;user&lt;/LI&gt;
&lt;LI&gt;createdDateTime&lt;/LI&gt;
&lt;LI&gt;MerchantName&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;TransactionDate&lt;/LI&gt;
&lt;LI&gt;TransactionTime&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Currency&lt;/LI&gt;
&lt;LI&gt;Category&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Total&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="12"&gt;
&lt;LI&gt;Still in the&amp;nbsp;&lt;STRONG&gt;Import data&lt;/STRONG&gt;&amp;nbsp;wizard, click&amp;nbsp;&lt;STRONG&gt;Indexer&lt;/STRONG&gt;&amp;nbsp;&amp;gt;&amp;nbsp;&lt;STRONG&gt;Name&lt;/STRONG&gt;, and type a name for the indexer.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This object defines an executable process. For now, use the default option (&lt;STRONG&gt;Once&lt;/STRONG&gt;) to run the indexer once, immediately.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="13"&gt;
&lt;LI&gt;Click&amp;nbsp;&lt;STRONG&gt;Submit&lt;/STRONG&gt;&amp;nbsp;to create and simultaneously run the indexer.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Soon you should see the newly created indexer in the list, with status indicating "in progress" or success, along with the number of documents indexed.&lt;/P&gt;
&lt;P&gt;The main service page provides links to the resources created in your Azure Cognitive Search service. To view the index you just created, click&amp;nbsp;&lt;STRONG&gt;Indexes&lt;/STRONG&gt;&amp;nbsp;from the list of links.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="step13.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248821iFBF89B5FE72FA88E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="step13.png" alt="step13.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="14"&gt;
&lt;LI&gt;Click on the index (&lt;EM&gt;azureblob-indexer&lt;/EM&gt; in this case) from the list of links and view the index-schema.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Now you should have a search index that you can use to query the receipt data that’s been extracted from the uploaded receipts.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="15"&gt;
&lt;LI&gt;Click the search explorer&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="15.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249238i4DD762412E1DBF15/image-size/medium?v=v2&amp;amp;px=400" role="button" title="15.png" alt="15.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="16"&gt;
&lt;LI&gt;From the index drop down choose the relevant index. Choose the default API Version (2020-06-30) for this exercise.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="16.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/249276i6A72C8DAABEBDD28/image-size/medium?v=v2&amp;amp;px=400" role="button" title="16.png" alt="16.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL start="17"&gt;
&lt;LI&gt;In the search bar, paste a query string (e.g. &lt;STRONG&gt;category='Entertainment'&lt;/STRONG&gt;)&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;You will get results as verbose JSON documents as shown below:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="17.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/248827iD8A00676A9051ED7/image-size/large?v=v2&amp;amp;px=999" role="button" title="17.png" alt="17.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that you have built an indexer and pointed it at your data, you can build queries programmatically and extract information to answer questions such as the following (a minimal query sketch is shown after the list):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;How much did I spend last Thursday?&lt;/LI&gt;
&lt;LI&gt;How much have I spent on entertainment over the last quarter?&lt;/LI&gt;
&lt;LI&gt;Did I spend anything at ‘The Crown and Pepper’ last month?&lt;/LI&gt;
&lt;/UL&gt;
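&lt;P&gt;As a rough sketch of such a programmatic query, the snippet below uses the azure-search-documents client library in Python. The service URL, index name, key, and the assumption that Category is filterable and Total is retrievable are placeholders based on the index defined above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query the receipt index programmatically
# (service name, index name, key, and field names are placeholders based on the index above)
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://YOUR-SEARCH-SERVICE.search.windows.net"
index_name = "azureblob-index"
query_key = "YOUR-QUERY-KEY"

client = SearchClient(endpoint, index_name, AzureKeyCredential(query_key))

# e.g. "How much have I spent on entertainment?" (assumes Category is marked filterable)
results = client.search(search_text="*", filter="Category eq 'Entertainment'")

total = sum(float(doc["Total"]) for doc in results)
print("Entertainment spend:", total)
&lt;/LI-CODE&gt;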
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Additional Ideas&lt;/H2&gt;
&lt;P&gt;In addition to the services and functionalities used throughout this exercise, there are numerous other ways you can use Azure AI to build in support for all kinds of receipts or invoices. For example, the logo extractor can be used to identify logos of popular restaurants or hotel chains, and the business card model can ingest business contact information just as easily as we saw with receipts.&lt;/P&gt;
&lt;P&gt;We encourage you to explore some of the following ideas to enrich your application:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Search invoices for specific line items&lt;/LI&gt;
&lt;LI&gt;Train the models to recognize different expense categories such as entertainment, supplies, etc.&lt;/LI&gt;
&lt;LI&gt;Add &lt;A href="https://docs.microsoft.com/en-gb/azure/cognitive-services/LUIS/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Language Understanding (LUIS)&lt;/STRONG&gt;&lt;/A&gt; to ask your app questions in natural language and extract formatted reports&lt;/LI&gt;
&lt;LI&gt;Add &lt;A href="https://azure.microsoft.com/services/cognitive-services/qna-maker/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Azure QnA Maker&lt;/STRONG&gt;&lt;/A&gt; to your app and get insights such as how much you spent on entertainment last month, or other categories of insights you’d like to explore&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 Jan 2021 01:19:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-a-personal-finance-app-using-azure/ba-p/2088995</guid>
      <dc:creator>mernanashed</dc:creator>
      <dc:date>2021-01-26T01:19:08Z</dc:date>
    </item>
    <item>
      <title>Enhanced Table Extraction from documents with Form Recognizer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011</link>
      <description>&lt;P&gt;&lt;EM&gt;Authors: Lei Sun, Neta Haiby, Cha Zhang, Sanjeev Jagtap&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Documents containing tables pose a major hurdle for information extraction. Tables are often found in financial documents, legal documents, insurance documents, oil and gas documents and more. Tables in documents are often the most important part of the document but extracting data from tables in documents presents a unique set of challenges.&amp;nbsp;Challenges include an accurate detection of the tabular region within an image, and subsequently detecting and extracting information from the rows and columns of the detected table, merged cells, complex tables, nested tables and more. Table extraction is the task of detecting the tables within the document and extracting them into a structured output that can be consumed by workflow applications such as robotic process automation (RPA) services, data analyst tools such as excel, databases and search services.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Table-slides.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246195i55F9B8A11D006D71/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table-slides.gif" alt="Table-slides.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Customers often use manual processes for data extraction and digitization. However, with the new enhanced table extraction feature you can send a document (PDF or images) to Form Recognizer for extraction of all the information into a structured usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246196i19F9FDF96FA549FE/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 1.png" alt="Table Blog 1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Table extraction challenges&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Table extraction from a wide variety of document images is a challenging problem due to the heterogeneous table structures, diverse table contents, and erratic use of ruling lines. To name a few concrete examples, in financial reports and technical publications, some borderless tables may have complex hierarchical header structures, contain many multi-line, empty or spanned cells, or have large blank spaces between neighboring columns. In forms, some tables may be embedded in other more complex tabular objects (e.g., nested tables) and some neighboring tables may be very close to each other which makes it hard to determine whether they should be merged or not. In invoices, tables may have different sizes, e.g., some key-value pairs composed tables may contain only two rows/columns and some line-item tables may span multiple pages. Sometimes, some objects in document images like figures, graphics, code listings, structurally laid out text, or flow charts may have similar textures as tables, which poses another significant challenge for successful detection of tables and reduction of false alarms. To make matters worse, many scanned or camera-captured document images are of poor image quality, and tables contained in them may be distorted (even curved) or contain artifacts or noises.&amp;nbsp; Existing table extraction solutions fall short of extracting tables from such document images with high accuracy, which has prevented workflow applications from effectively leveraging this technology.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246198i47D4AA2499B81261/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 2.png" alt="Table Blog 2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Form Recognizer Table extraction &lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;In recent years, the success of deep learning in various computer vision applications has motivated researchers to explore deep neural networks like convolutional neural networks (CNN) or graph neural networks (GNN) for detecting tables and recognizing table structures from document images. With these new technologies, the capability and performance of modern table extraction solutions have been improved significantly.&lt;/P&gt;
&lt;P&gt;In the latest release of Form Recognizer, we created a state-of-the-art table extraction solution with cutting-edge deep learning technology. After validating that Faster/Mask R-CNN based table detectors are effective in detecting a variety of tables (e.g., bordered or borderless tables, tables embedded in other more complex tabular objects, and distorted tables) in document images robustly, we further proposed a new method to improve the localization accuracy of such detectors, and achieved state-of-the-art results on the &lt;A href="https://github.com/cndplab-founder/ICDAR2019_cTDaR" target="_blank" rel="noopener"&gt;ICDAR-2019 cTDaR table detection benchmark dataset&lt;/A&gt; by only using a lightweight ResNet18 backbone network (Table 1).&lt;/P&gt;
&lt;P&gt;For the challenge of table recognition or table cell extraction, we leveraged existing CNN/GNN based approaches, which have proven to be robust to complex tables like borderless tables with complex hierarchical header structures and multi-line/empty/spanned cells. We further enhanced them to deal with distorted or even slightly curved tables in camera-captured document images, making the algorithm more widely applicable to different real-world scenarios. Figure 1 below shows a few examples to demonstrate such capabilities.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246199i494D4508F59AFFD0/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 3.png" alt="Table Blog 3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Easy and Simple to use&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Try it out with the &lt;A href="https://fott-preview.azurewebsites.net/layout-analyze" target="_blank" rel="noopener"&gt;Form Recognizer Sample Tool.&amp;nbsp;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 5.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246263i813A5841DE546599/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 5.png" alt="Table Blog 5.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Extracting tables from documents is as simple as two API calls; no training, preprocessing, or anything else is needed. Just call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync" target="_blank" rel="noopener"&gt;Analyze Layout&amp;nbsp;operation&lt;/A&gt; with your document (image, TIFF, or PDF file) as the input to extract the text, tables, selection marks, and structure of the document.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;: &lt;STRONG&gt;The Analyze Layout Operation – &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;https://{endpoint}/formrecognizer/v2.1-preview.2/layout/analyze&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The Analyze Layout call returns a response header field called&amp;nbsp;Operation-Location. The&amp;nbsp;Operation-Location&amp;nbsp;value is a URL that contains the Result ID to be used in the next step.&lt;/P&gt;
&lt;P&gt;Operation location - &lt;BR /&gt;&lt;EM&gt;https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;: &lt;STRONG&gt;The Get Analyze Layout Result Operation –&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once you have the operation location call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeLayoutResult" target="_blank" rel="noopener"&gt;Get Analyze Layout Result&lt;/A&gt;&amp;nbsp;operation. This operation takes as input the Result ID that was created by the Analyze Layout operation.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;https://{endpoint}/formrecognizer/v2.1-preview.2/layout/analyzeResults/{resultId}&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The output of the Get Analyze Layout Results will provide a JSON output with the extracted table – rows, columns, row span, col span, bounding box and more.&lt;/P&gt;
&lt;P&gt;For example:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Table Blog 4.jpg" style="width: 382px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/246200i7B5CF06774E40D85/image-size/large?v=v2&amp;amp;px=999" role="button" title="Table Blog 4.jpg" alt="Table Blog 4.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started &lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;To get started, create a Form Recognizer resource in the &lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt; and try out your tables in the &lt;A href="https://fott-preview.azurewebsites.net/layout-analyze" target="_blank" rel="noopener"&gt;Form Recognizer Sample Tool&lt;/A&gt;. You can also use the &lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=ga%2Cv2-0&amp;amp;pivots=programming-language-rest-api" target="_blank" rel="noopener"&gt;Form Recognizer client library or REST API.&lt;/A&gt;&lt;/SPAN&gt;&lt;BR /&gt;Note that table output is included in all parts of the Form Recognizer service (prebuilt, layout, and custom) in the pageResults section of the JSON output.&lt;/LI&gt;
&lt;LI&gt;For additional questions please reach out to us at&amp;nbsp;&lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 14 Jan 2021 19:03:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011</guid>
      <dc:creator>NetaH</dc:creator>
      <dc:date>2021-01-14T19:03:46Z</dc:date>
    </item>
    <item>
      <title>Bot Framework Composer 1.3 is now available!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/bot-framework-composer-1-3-is-now-available/ba-p/1996923</link>
      <description>&lt;P&gt;This week, as the year draws to a close, we are excited to announce that Bot Framework Composer 1.3 is now available to download. Composer has come a long way since we made the product GA (generally available) at the Microsoft Build conference earlier this year and this is our biggest release yet, adding many significant capabilities and making building sophisticated bots and virtual assistants even easier!&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;New features to improve the developer experience and workflow&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;If you are working with Bot Framework Skills today, you will know that developing multiple bots locally that work together can sometimes be a challenge, especially when it comes to setting up debugging. In Composer 1.3, we have now added a multi-bot authoring and management experience to transform this scenario, adding the capability to create, manage and test multiple bots within a single project. With a single click, you can now start all local bots for debugging, enabling you to test your root (parent) bot, connected to one or more skills, with no additional manual configuration needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another significant enhancement is for the provisioning feature, which previously required developers to leave Composer and run a PowerShell script, copying back a resulting configuration into Composer. Now though, the provisioning process has been overhauled and users can now login to Azure, provision required resources and subsequently publish bots, all within the Composer environment!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="provisioning.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241314iB969EFD59180320E/image-size/large?v=v2&amp;amp;px=999" role="button" title="provisioning.PNG" alt="provisioning.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Additionally, we have implemented a new settings experience, providing an improved interface, removing the need to manually edit the underlying JSON for common settings, whilst retaining the ability to make changes or add additional configuration manually if you need to.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Localization&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to the existing capability for developers to localize their bots, multilingual support has now been added to the Composer UI! You can now choose from a long list of available languages within the Application Settings pane to change the language displayed within Composer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="languages.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241315i7E753590FE83785A/image-size/large?v=v2&amp;amp;px=999" role="button" title="languages.png" alt="languages.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Preview features&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of version 1.3, you can now choose to enable one or more preview features by choosing preview feature flags within the Composer settings page. These features are designed to give you early access and a chance to try what we are working on right now for future Composer releases. The following preview feature flags are now available.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;U&gt;Form Dialogs&lt;/U&gt; – Automatically generate a sophisticated dialog by simply specifying the properties that you would like customers to provide as part of the conversation, with Composer then generating the appropriate dialog, language understanding (to enable disambiguation and interruption scenarios) and bot response (.lg files) assets.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Orchestrator&lt;/U&gt; – A new top-level recognizer which can help to arbitrate (dispatch) between multiple LUIS and QnA Maker models to ensure accurate routing of user requests to the appropriate language model or skill.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Package Manager&lt;/U&gt; – Developers can now discover and install packages from NuGet / NPM that contain re-usable assets, including dialogs, custom actions and .LG (language generation) files, that can be utilized by their bots. Once installed, assets contained within a package become available for use within a bot. Moving forward, we will provide guidance for how you can create and publish your own packages (including to internal feeds if desired), as well as making available a number of packages covering common scenarios that will ship with Composer.&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="package-manager.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/241316i73318D9B10036745/image-size/large?v=v2&amp;amp;px=999" role="button" title="package-manager.PNG" alt="package-manager.PNG" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;U&gt;Conversational core template&lt;/U&gt; – Built on the new package capabilities, surfaced via the preview of the Package Manager, we are developing a new component model for bot development using re-usable building blocks (packages). With this preview, users can create a bot using the new conversational core template, which consists of a configurable runtime that can be extended with packages or by importing additional skills.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Help us to improve Composer!&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;Within this release we have enabled the ability for users of Composer to opt in to sending usage information to us, to allow us to better understand how Composer is used. As we gather this telemetry, we can use it as an additional signal to help us prioritize our efforts in future releases and ensure we are focusing on the right features. You can help us by opting into providing usage data via the Data Collection section of the Composer settings page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, a huge thank you to all of our users for your support and feedback during 2020 - we are excited to bring more significant updates to you as we move into 2021. Happy Holidays to everyone from the entire Conversational AI team!&lt;/P&gt;</description>
      <pubDate>Thu, 17 Dec 2020 10:42:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/bot-framework-composer-1-3-is-now-available/ba-p/1996923</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-12-17T10:42:45Z</dc:date>
    </item>
    <item>
      <title>Azure Neural Text-to-Speech updates: 51 new voices added to the portfolio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/ba-p/1988418</link>
      <description>&lt;P&gt;&lt;EM&gt;This post was co-authored with Qinying Liao, Sheng Zhao, Gang Wang, Yueying Liu&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&amp;nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;close to human-parity&lt;/A&gt;. &amp;nbsp;Since its launch, we have seen it widely adopted in a variety of scenarios by many Azure customers, from voice assistants to audio content creation. More and more customers are asking for richer and more diverse choices of synthetic voices for different use cases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Azure Neural TTS has added 51 new voices for a total of 129 neural voices across 54 languages/locales. With this release, we provide at least one male and one female voice for customers to choose in each language/locale. &amp;nbsp;In total, Azure TTS now enables developers to reach millions more people with more than 200 voices available in standard and neural TTS.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;What's new&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS has now been extended to support 51 new voices, giving you both male and female voices in each language for your apps. You can hear samples of the voices below, or try them with your own text in &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features?ocid=AID3027325" target="_blank" rel="noopener"&gt;our demo&lt;/A&gt;.&lt;/P&gt;
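&lt;P&gt;As a quick, illustrative Python sketch (using the Speech SDK), this is how you would synthesize speech with one of the new voices; the key and region are placeholders, and the voice is selected by its short name, which combines the locale and voice name from the tables below (for example, en-CA-LiamNeural):&lt;/P&gt;
&lt;PRE&gt;
import azure.cognitiveservices.speech as speechsdk

# Placeholders: replace with your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="{your-speech-key}", region="{your-region}")

# Select one of the newly released neural voices by its short name.
speech_config.speech_synthesis_voice_name = "en-CA-LiamNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("He had held the position since 2010.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio")
&lt;/PRE&gt;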
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;46 new voices are generally available&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In total 46 new voices are released across the 49 locales that are generally available in the Azure data centers/regions that support neural TTS (see the full list of &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;Azure regions&lt;/A&gt; here).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 80%;" width="80%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="180" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="690" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="880" class="lia-align-center" style="width: 250px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample audio&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ar-EG&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Arabic (Egypt)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ShakirNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;البركان هو أكثر ما في الطبيعــة إثارة للرهبة&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ar-EG%20Shakir.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ar-SA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Arabic (Saudi Arabia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HamedNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;الناس مَعادن، تصدأ بالملل، وتتمدد بالأمل، وتنكمش بالألم&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ar-SA%20Hamed.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;bg-BG&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Bulgarian (Bulgaria)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;BorislavNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Шофьорът задължително трябва да вземе експерт за второ мнение, за да провери дали всички системи на автомобила работят нормално.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/bg-BG%20Borislav.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ca-ES&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Catalan (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;EnricNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Les activitats docents tenen lloc al campus del Poblenou.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ca-ES%20Enric.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ca-ES&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Catalan (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JoanaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;L'artista està considerat com el pintor de les multituds.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ca-ES%20Joana.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;cs-CZ&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Czech (Czech)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AntoninNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Opravdový zasvěcenec ví, že nejmocnějším tajemstvím je to, které nemá žádný obsah.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/cs-CZ%20Antonin.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;da-DK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Danish (Denmark)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JeppeNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;61 procent af de kandidatstuderende er kvinder.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/da-DK%20Jeppe.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;de-AT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;German (Austria)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Das ist das letzte lange Pfingstwochenende für Schülerinnen und Schüler.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-AT%20Jonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;de-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;German (Switzerland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;JanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Eine Person, die sich bei Brandausbruch im oberen Stock aufgehalten hat, hat sich noch rechtzeitig in Sicherheit bringen können.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-CH%20Jan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;el-GR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Greek (Greece)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NestorasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Συγκλονιστικές εξελίξεις και ανατροπές στα επόμενα επεισόδια .&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/el-GR%20Nestoras.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-CA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (Canada)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;LiamNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;He had held the position since 2010.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/en-CA%20Liam.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ConnorNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Life is short, think before you talk.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/en-IE%20Connor.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;en-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;English (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;PrabhatNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Sometimes you can see snow on the mountains.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/en-IN%20Prabhat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fi-FI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Finnish (Finland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HarriNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Yhtiö kertoi loppuvuoden tuloksestaan ennakkotietoja.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fi-FI%20Harri.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fi-FI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Finnish (Finland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SelmaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Hevoset ovat uljaita ja nopeita eläimiä.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fi-FI%20Selma.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;fr-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;French (Switzerland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FabriceNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;La Suisse comptera 5,6 millions (12%) de personnes actives en 2050.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/fr-CH%20Fabrice.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;he-IL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hebrew (Israel)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AvriNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P class="lia-align-right"&gt;הוא אמר שהמספרים מדאיגים בשל עצמם, אבל בכל הישיבות שלנו המסקנה היא שזה סימפטום למשהו רחב יותר.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/he-IL%20AvriNeural.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hi-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hindi (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MadhurNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;सिद्धार्थ ने भी शहनाज के साथ इस इवेंट की फोटो शेयर की है।&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/hi-IN%20Madhur.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hr-HR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Croatian (Croatia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SreckoNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Video je pregledan gotovo 70 tisuća puta, a neki od obožavatelja su mu u komentarima pisali kako ih je motivirao.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/hr-HR%20Srecko.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;hu-HU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Hungarian (Hungary)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;TamasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;A lakóhelyem nagyon komfortos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/hu-HU%20Tamas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;id-ID&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Indonesian (Indonesia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;GadisNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Inflasi ringan terjadi apabila kenaikan harga berada di bawah angka 10% setahun.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/id-ID%20Gadis.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ms-MY&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Malay (Malaysia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;OsmanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Setiap individu perlu memakai topeng muka ketika berada di luar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ms-MY%20Osman.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nb-NO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Norwegian (Bokmål, Norway)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FinnNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Jansson forteller at den svenske øya tar imot rundt 8000 besøkende fra Norge årlig.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nb-NO%20Finn.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nb-NO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Norwegian (Bokmål, Norway)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;PernilleNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;For en fantastisk forestilling!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nb-NO%20Pernille.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nl-NL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Dutch (Netherlands)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;FennaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;De&amp;nbsp;afstand tussen Rotterdam en Breda&amp;nbsp;is ongeveer 45 km.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nl-NL%20Fenna.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;nl-NL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Dutch (Netherlands)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MaartenNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zij heeft haar studie al een tijdje geleden afgerond.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/nl-NL%20Maarten.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pl-PL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Polish (Poland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AgnieszkaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;To już nie będzie to samo, będzie drożej.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pl-PL%20Agnieszka.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pl-PL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Polish (Poland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MarekNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Na wszelki wypadek sprawdź, czy coś cię jednak nie zaskoczy.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pl-PL%20Marek.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pt-PT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Portuguese (Portugal)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;DuarteNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Para a aprovação do exame, tenho de ter pelo menos 80% das respostas corretas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pt-PT%20Duarte.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;pt-PT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Portuguese (Portugal)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;RaquelNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;A minha mãe ensinou-me que devo ter respeito por todos, mas principalmente pelos mais velhos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/pt-PT%20Raquel.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ro-RO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Romanian (Romania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;EmilNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Actul normativ se axează pe instituirea de măsuri active, 41,5 % din salariul de bază la revenirea din șomaj tehnic.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ro-RO%20Emil.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ru-RU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;DmitryNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Ранее посольство требовало от агентства опровержения статьи о количестве больничных коек в России.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ru-RU%20Dmitry.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ru-RU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SvetlanaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Изменений в организме людей, попробовавших еду без приправ, не произошло.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/ru-RU%20Svetlana.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sk-SK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Slovak (Slovakia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;LukasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zápis 45 % je v skutočnosti iba skratka pre zlomok.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sk-SK%20Lukas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sl-SI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Slovenian (Slovenia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;RokNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Zloraba bonov in dvigovanje cen turističnih storitev je nesprejemljivo ravnanje.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sl-SI%20Rok.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sv-SE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Swedish (Sweden)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MattiasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Båda lagen bjöd på riktigt bra hockey och skapade flera riktigt bra målchanser.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sv-SE%20Mattias.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;sv-SE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Swedish (Sweden)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;SofieNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Det fanns ingen trafik runt torget.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/sv-SE%20Sofie.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;ta-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Tamil (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;ValluvarNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;எவ்வளவு அருமையான பாடல் அது!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/ta-IN%20Valluvar.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;te-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Telugu (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;MohanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;అబ్బ, ఎంత పెద్ద భవనమో!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/Indic%20locales/te-IN%20Mohan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;th-TH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Thai (Thailand)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NiwatNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;ธุรกิจขายอาหารเป็นธุรกิจที่ได้รับความนิยมมากที่สุด&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/th-TH%20Niwat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;tr-TR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Turkish (Turkey)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;AhmetNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Sosyal mesafeye büyük ölçüde riayet eden çocuklar, başta mahalle parkları olmak üzere sahiller ve oyun parklarında enerji attı.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/tr-TR%20Ahmet.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;vi-VN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Vietnamese (Vietnam)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;NamMinhNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;Nhiệt độ hiện tại ở thành phố Hồ Chí Minh là 38 độ C.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/vi-VN%20NamMinh.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-HK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Cantonese, Traditional)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HiuMaanNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;抗疫舉措成為安全重啟經濟的重要一環。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-HK%20HiuMaan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-HK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Cantonese, Traditional)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;WanLungNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;隨着疫情緩和，愈來愈多人回到辦公室上班，但是很多人仍想留在家中工作（work from home）。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-HK%20WanLung.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-TW&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Taiwanese Mandarin)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;HsiaoChenNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;賭博的勝率應該不到50%。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-TW%20HsiaoChen.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="74"&gt;
&lt;P&gt;zh-TW&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="125"&gt;
&lt;P&gt;Chinese (Taiwanese Mandarin)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="85"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="111"&gt;
&lt;P&gt;YunJheNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="228"&gt;
&lt;P&gt;台北車站大廳能不能坐，連日引發正反意見。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/zh-TW%20YunJhe.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;5 new voices are in public preview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have also added 5 male voices in the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;5 low-resource languages &lt;/A&gt;that have been supported since November. These voices are available in public preview in&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;three Azure regions&lt;/A&gt;: EastUS, SouthEastAsia and WestEurope.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the samples below:&lt;/P&gt;
&lt;TABLE style="width: auto;" width="auto"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="57" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216" class="lia-align-center" style="width: 250px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample audio&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;KertNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Ametlikku meetodit sellise pettuse avastamiseks ei olegi olemas.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/et-EE%20Kert.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;ColmNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Ritheadh próiseas comhairliúcháin faoin scéal sa bhfómhar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/ga-IE%20Colm.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;LeonasNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Aišku, anksčiau ar vėliau paaiškės tos priežastys.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lt-LT%20Leonas.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;NilsNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Aizvadīto gadu uzņēmums noslēdzis ar 6,3 miljonu eiro zaudējumiem.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lv-LV%20Nils.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="57"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="159"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;JosephNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Anki tfajjel tal-primarja jaf li l-popolazzjoni tikber fejn hemm il-prosperità.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/mt-MT%20Joseph.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this release, we now support a total of 129 neural voices across 54 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech" target="_blank" rel="noopener"&gt;Language support - Speech service - Azure Cognitive Services | Microsoft Docs&lt;/A&gt; for the full language and voice list.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="map for blog (2).png" style="width: 899px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/240968iE8FCBB4F388639C2/image-size/large?v=v2&amp;amp;px=999" role="button" title="map for blog (2).png" alt="map for blog (2).png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Continuous voice quality improvement&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In general, Neural TTS converts text to lifelike speech; however, there are nuances that can always be improved. For example, customers have requested that Katja, our de-DE neural voice, be able to pronounce English words in the context of a German sentence. This was valuable feedback, and we anticipate a similar need across languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For German, we observed that our users prefer the voice to handle an English word or phrase as closely as possible to the native English pronunciation. To enable a voice model to speak English as a second language, we would normally need to collect speech data of the same speaker speaking English in addition to his/her native language. This is a big challenge, as we do not have sufficient multi-language speech data from our German voice talents. By leveraging the cross-lingual capability of &lt;SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener"&gt;UNI-TTS&lt;/A&gt;&lt;/SPAN&gt;, we are able to generate more English pronunciation data with the transferred voice from our German voice talent. Such data is used to improve the quality of English word and phrase pronunciations for the German Katja voice, so Katja can pronounce English words in a more natural way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;CMOS&lt;/A&gt; metric is used to measure the improvement of the English word pronunciation for Katja. The table below shows that the updated model is significantly better at pronouncing English words in the context of a German sentence.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: auto;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="160px" class="lia-align-center"&gt;&lt;STRONG&gt;Script&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;&lt;STRONG&gt;Old&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160px" style="width: 200px;"&gt;Star Wars - Das Erwachen der Macht&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00026-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00026-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="160px"&gt;&lt;SPAN&gt;Three&lt;/SPAN&gt; Billboards outside Ebbing, Missouri.&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00037-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80px"&gt;
&lt;P&gt;&lt;AUDIO style="font-family: inherit;" controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release/de-DE%20samples/00037-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;This improvement has now been released to the Azure Neural TTS service for Katja. Moving forward, we’ll extend this capability to support more languages.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Tell us your experience!&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you’re building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let us know how you are using or plan to use Neural TTS voices in this &lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRx5-v_jX54tFo-eNTe-69oBUMDU3SDlVUEFCNkQyNjNXM0tOS0NQNkM2VS4u" target="_blank" rel="noopener"&gt;form&lt;/A&gt;. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing your experience and developing more compelling services together with you for the developers around the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;Add voice to your app in 15 minutes&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/?ocid=AID3027325" target="_blank" rel="noopener"&gt;Explore the available voices in this demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk#optional-change-the-language-and-bot-voice" target="_blank" rel="noopener"&gt;Build a voice-enabled bot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;Deploy Azure TTS voices on prem with Speech Containers&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Build your custom voice&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Dec 2020 06:46:34 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-text-to-speech-updates-51-new-voices-added-to-the/ba-p/1988418</guid>
      <dc:creator>GarfieldHe</dc:creator>
      <dc:date>2020-12-16T06:46:34Z</dc:date>
    </item>
    <item>
      <title>Meta-data driven key-value pairs extraction with Azure Form Recognizer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/meta-data-driven-key-value-pairs-extraction-with-azure-form/ba-p/1942595</link>
      <description>&lt;P&gt;Most organizations are now aware of how valuable the forms (pdf, images, videos…) they keep in their closets are. They are looking for best practices and most cost-effective ways and tools to digitize those assets. &amp;nbsp;By extracting the data from those forms and combining it with existing operational systems and data warehouses, they can build powerful AI and ML models to get insights from it to deliver value to their customers and business users.&lt;/P&gt;
&lt;P&gt;With the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview" target="_blank" rel="noopener"&gt;Form Recognizer Cognitive Service&lt;/A&gt;, we help organizations to harness their data, automate processes (invoice payments, tax processing …), save money and time and get better accuracy.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 1-Typical form.png" style="width: 921px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236730i90EB2354726E1A31/image-size/large?v=v2&amp;amp;px=999" role="button" title="Figure 1-Typical form.png" alt="Figure 1-Typical form.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 1:Typical form&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my first blog about automated form processing, I described how you can extract key-value pairs from your forms in real time using the Azure Form Recognizer cognitive service. We have successfully implemented that solution for many customers.&lt;/P&gt;
&lt;P&gt;Often, after a successful PoC or MVP, our customers realize that not only do they need this real-time solution, but they also have a huge backlog of forms they would like to ingest into their relational databases, NoSQL databases, or data lake in a batch fashion. They have different types of forms and they don’t want to build a model for each type. They are also looking for an easy and quick way to ingest new types of forms.&lt;/P&gt;
&lt;P&gt;In this blog, we’ll describe how to dynamically train a Form Recognizer model to extract the key-value pairs from different types of forms at scale using Azure services. We’ll also share a GitHub repository where you can download the code and implement the solution we describe in this post.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The backlog of forms may be in your on-premises environment or on an (s)FTP server. We assume that you were able to upload them into an Azure Data Lake Store Gen 2, using &lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal" target="_blank" rel="noopener"&gt;Azure Data Factory&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows" target="_blank" rel="noopener"&gt;Storage Explorer&lt;/A&gt; or &lt;A href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs" target="_blank" rel="noopener"&gt;AzCopy&lt;/A&gt;. Therefore, the solution we’ll describe here will focus on the data ingestion from the data lake to the (No)SQL database.&lt;/P&gt;
&lt;P&gt;Our product team published a great tutorial on how to &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;Train a Form Recognizer model and extract form data by using the REST API with Python&lt;/A&gt;. The solution described here demonstrates the approach for one model and one type of forms and is ideal for real-time form processing.&lt;/P&gt;
&lt;P&gt;The value-add of this post is to show how to automatically train a model on new and different types of forms using a meta-data driven approach, in batch mode.&lt;/P&gt;
&lt;P&gt;Below is the high-level architecture.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 2 - High Level Architecture.png" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236732i2CE7C4C2187D161A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Figure 2 - High Level Architecture.png" alt="Figure 2 - High Level Architecture.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 2:&amp;nbsp; High Level Architecture&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Azure services required to implement this solution&lt;/H2&gt;
&lt;P&gt;To implement this solution, you will need to create the below services:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Form Recognizer resource:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;A Form Recognizer resource&amp;nbsp;to set up and configure the Form Recognizer cognitive service and to get the API key and endpoint URI.&lt;/P&gt;
&lt;H3&gt;Azure SQL single database:&lt;/H3&gt;
&lt;P&gt;We will create a metadata table in Azure SQL Database. This table will contain the non-sensitive data required by the Form Recognizer REST API. The idea is that whenever there is a new type of form, we just insert a new record in this table and trigger the training and scoring pipeline (a minimal sketch of inserting such a record follows the attribute list below).&lt;BR /&gt;The required attributes of this table are:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;form_description: This field is not required for training the model or for inference. It simply provides a description of the type of forms we are training the model for (for example, client A forms, Hotel B forms, ...).&lt;/LI&gt;
&lt;LI&gt;training_container_name: The storage account container name where we store the training dataset. It can be the same as scoring_container_name.&lt;/LI&gt;
&lt;LI&gt;training_blob_root_folder: The folder in the storage account where we’ll store the files for the training of the model.&lt;/LI&gt;
&lt;LI&gt;scoring_container_name: The storage account container name where we store the files we want to extract the key-value pairs from. It can be the same as training_container_name.&lt;/LI&gt;
&lt;LI&gt;scoring_input_blob_folder: The folder in the storage account where we’ll store the files to extract key-value pairs from.&lt;/LI&gt;
&lt;LI&gt;model_id: The identifier of the model we want to retrain. For the first run, the value must be set to -1 so that a new custom model is created and trained. The training notebook returns the newly created model id to the data factory and, using a stored procedure activity, we update the metadata table in the Azure SQL database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Whenever you have a new form type, you need to reset the model id to -1 and retrain the model.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;file_type: The supported types are application/pdf, image/jpeg, image/png, and image/tiff.&lt;/LI&gt;
&lt;LI&gt;form_batch_group_id: Over time, you might have multiple form types trained against different models. The form_batch_group_id allows you to specify all the form types that have been trained using a specific model.&lt;/LI&gt;
&lt;/UL&gt;
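&lt;P&gt;To make this metadata-driven approach concrete, the snippet below is a minimal sketch of registering a new form type by inserting a record into the parametrization table from Python with pyodbc. The table name (form_recognizer_metadata), server, database, and credentials are illustrative assumptions and not part of the published solution; the column names follow the attributes listed above.&lt;/P&gt;
&lt;PRE&gt;
import pyodbc

# Connect to the Azure SQL database that hosts the metadata table.
# Server, database, and credentials are placeholders (assumptions).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=formsdb;"
    "UID=sqladmin;PWD=replace-me"
)

# Register a new form type. model_id = -1 tells the pipeline to train a new model.
row = (
    "Hotel B registration forms",   # form_description
    "forms",                        # training_container_name
    "hotel-b/train",                # training_blob_root_folder
    "forms",                        # scoring_container_name
    "hotel-b/score",                # scoring_input_blob_folder
    -1,                             # model_id (-1 = create and train a new model)
    "application/pdf",              # file_type
    1,                              # form_batch_group_id
)
cursor = conn.cursor()
cursor.execute(
    "INSERT INTO form_recognizer_metadata "
    "(form_description, training_container_name, training_blob_root_folder, "
    " scoring_container_name, scoring_input_blob_folder, model_id, file_type, "
    " form_batch_group_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    row,
)
conn.commit()
&lt;/PRE&gt;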
&lt;H3&gt;Azure Key Vault:&lt;/H3&gt;
&lt;P&gt;For security reasons, we don’t want to store certain sensitive information in the parametrization table in the Azure SQL database. We store those parameters in Azure Key Vault secrets.&lt;/P&gt;
&lt;P&gt;Below are the parameters we store in the key vault:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CognitiveServiceEndpoint: The endpoint of the form recognizer cognitive service. This value will be stored in Azure Key Vault for security reasons.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;CognitiveServiceSubscriptionKey: The access key of the cognitive service. This value will be stored in Azure Key Vault for security reasons. The screenshot below shows how to get the key and endpoint of the cognitive service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Figure 3 - Cognitive Service Keys and Endpoint.png" style="width: 444px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236735iB3D7BA397AC96780/image-dimensions/444x210?v=v2" width="444" height="210" role="button" title="Figure 3 - Cognitive Service Keys and Endpoint.png" alt="Figure 3 - Cognitive Service Keys and Endpoint.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 3: Cognitive Service Keys and Endpoint&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;StorageAccountName: The storage account where the training dataset and forms we want to extract the key value pairs from are stored. The two storage accounts can be different. The training dataset must be in the same container for all form types. They can be in different folders.&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;StorageAccountSasKey: The shared access signature (SAS) key of the storage account.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The screenshot below shows the key vault after you create all the secrets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Figure 4 - Key Vault Secrets.png" style="width: 543px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/236738i47D666152D7294EC/image-dimensions/543x242?v=v2" width="543" height="242" role="button" title="Figure 4 - Key Vault Secrets.png" alt="Figure 4 - Key Vault Secrets.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 4 : Key Vault Secrets&lt;/P&gt;
&lt;H3&gt;Azure Data Factory:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;To orchestrate the training and scoring of the model. Using a Lookup activity, the pipeline retrieves the parameters from the Azure SQL database and then runs the training and scoring Databricks notebooks. All the sensitive parameters stored in Key Vault are retrieved inside the notebooks.&lt;/P&gt;
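&lt;P&gt;For reference, here is a minimal sketch of how the Databricks notebooks can read the sensitive parameters from Key Vault through a Key Vault-backed secret scope. The scope name (formrecognizer-scope) is an illustrative assumption; the secret names match the ones listed above.&lt;/P&gt;
&lt;PRE&gt;
# Runs inside a Databricks notebook, where dbutils is available by default.
# The secret scope name is an assumption; the keys match the Key Vault secrets above.
cognitive_service_endpoint = dbutils.secrets.get(
    scope="formrecognizer-scope", key="CognitiveServiceEndpoint")
cognitive_service_key = dbutils.secrets.get(
    scope="formrecognizer-scope", key="CognitiveServiceSubscriptionKey")
storage_account_name = dbutils.secrets.get(
    scope="formrecognizer-scope", key="StorageAccountName")
storage_account_sas_key = dbutils.secrets.get(
    scope="formrecognizer-scope", key="StorageAccountSasKey")
&lt;/PRE&gt;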
&lt;H3&gt;Azure Data Lake Gen 2:&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;To store the training dataset and the forms we want to extract the key-value pairs from. The training and the scoring datasets can be in different containers but, as mentioned above, the training dataset must be in the same container for all form types.&lt;/P&gt;
&lt;H3&gt;Azure Databricks:&lt;/H3&gt;
&lt;P&gt;To implement the Python scripts that train and score the model. Note that we could have used Azure Functions instead.&lt;/P&gt;
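&lt;P&gt;To give a feel for what the training notebook does, below is a hedged sketch that calls the Form Recognizer v2.0 Train Custom Model REST operation on the training folder and polls until the model is ready. The endpoint, key, SAS URL, and folder values are placeholders; refer to the GitHub repository linked below for the actual notebooks.&lt;/P&gt;
&lt;PRE&gt;
import time
import requests

# Placeholders -- in the solution these come from Key Vault and the metadata table.
endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
subscription_key = "YOUR-FORM-RECOGNIZER-KEY"
training_container_sas_url = "https://account.blob.core.windows.net/forms?sv=..."
training_blob_root_folder = "hotel-b/train"

# Start training a custom model on the files under the training folder.
resp = requests.post(
    endpoint + "/formrecognizer/v2.0/custom/models",
    headers={"Ocp-Apim-Subscription-Key": subscription_key,
             "Content-Type": "application/json"},
    json={"source": training_container_sas_url,
          "sourceFilter": {"prefix": training_blob_root_folder,
                           "includeSubFolders": False}},
)
resp.raise_for_status()
model_location = resp.headers["Location"]   # URL of the newly created model

# Poll the model until training completes, then keep its modelId so the
# pipeline can write it back to the metadata table via the stored procedure.
while True:
    model = requests.get(
        model_location,
        headers={"Ocp-Apim-Subscription-Key": subscription_key}).json()
    status = model["modelInfo"]["status"]
    if status in ("ready", "invalid"):
        break
    time.sleep(5)

model_id = model["modelInfo"]["modelId"]
print("Trained model:", model_id, "status:", status)
&lt;/PRE&gt;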
&lt;H3&gt;Azure Key Vault:&lt;/H3&gt;
&lt;P&gt;To store the sensitive parameters required by the Form Recognizer REST API.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The code to implement this solution is available in the following &lt;A href="https://github.com/issaghaba/Meta-data-driven-key-value-pairs-extraction-with-Azure-Form-Recognizer" target="_blank" rel="noopener"&gt;GitHub repository&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Get started with deploying Form Recognizer –&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Custom Model&lt;/STRONG&gt;&amp;nbsp;– extract text, tables and key value pairs&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model and extract form data by using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model with labels using the sample labeling tool&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Form Recognizer Sample Labeling Tool&amp;nbsp;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Try it out:&amp;nbsp;&lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;https://fott.azurewebsites.net/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Open Source project:&amp;nbsp;&lt;A href="https://github.com/microsoft/OCR-Form-Tools" target="_blank" rel="noopener"&gt;https://github.com/microsoft/OCR-Form-Tools&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Prebuilt receipts -&amp;nbsp;&lt;/STRONG&gt;extract data from USA sales receipts&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts" target="_blank" rel="noopener"&gt;Quickstart: Extract receipt data using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Layout -&amp;nbsp;&lt;/STRONG&gt;extract text and table structure (row and column numbers) from your documents&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-layout" target="_blank" rel="noopener"&gt;Quickstart: Extract layout data using the REST API&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;See&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/whats-new" target="_blank" rel="noopener"&gt;What’s New&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Nov 2020 23:07:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/meta-data-driven-key-value-pairs-extraction-with-azure-form/ba-p/1942595</guid>
      <dc:creator>IssaghaBa</dc:creator>
      <dc:date>2020-11-30T23:07:44Z</dc:date>
    </item>
    <item>
      <title>Introducing Asynchronous APIs for Text Analytics and Text Analytics for Health</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-asynchronous-apis-for-text-analytics-and-text/ba-p/1922422</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored with Sara Kandil&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are announcing a preview of new asynchronous (batch) APIs for Text Analytics and Text Analytics for health, which enable developers to apply Natural Language Processing (NLP) to even more scenarios so they can identify key phrases, entities and even personally identifiable information (PII).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Asynchronous Analyze API for&amp;nbsp;Text Analytics&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Text Analytics is a generally available &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/" target="_blank" rel="noopener"&gt;Azure Cognitive Service&lt;/A&gt; that lets you discover insights in text using Natural Language Processing (NLP). The service helps you identify key phrases and entities (people, places, organizations, events, and dates, among others), recognize text that contains personal information (PII), and analyze sentiment (positive, neutral, or negative).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To date, customers have been using Text Analytics by making synchronous calls to the service’s REST API, client library SDK, or by using containers to run Text Analytics in their own environment. Today, we are introducing a new preview Analyze operation for users to analyze larger documents asynchronously, combining multiple Text Analytics features in one call. This gives customers the flexibility to analyze more information at once when their applications don’t need a synchronous response. The new asynchronous Analyze operation for Text Analytics supports individual documents of up to 125k characters, and up to 25 documents in a request.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Analyze operation preview supports key phrase extraction, named entity recognition, and PII recognition, and is available in 5 Azure regions (West US 2, East US 2, West Europe, North Europe, and Central US). Support for the rest of the Text Analytics capabilities and additional regions is coming soon.&lt;/P&gt;
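&lt;P&gt;As an illustration, a minimal sketch of submitting an asynchronous Analyze job over REST might look like the following. The API version (v3.1-preview.3), the task payload shape, and the endpoint and key values are assumptions based on the preview at the time of writing; check the documentation for the current contract.&lt;/P&gt;
&lt;PRE&gt;
import time
import requests

# Placeholder Text Analytics resource endpoint and key (assumptions).
endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR-TEXT-ANALYTICS-KEY",
           "Content-Type": "application/json"}

# Submit one document for key phrase, entity, and PII analysis in a single call.
body = {
    "displayName": "sample analyze job",
    "analysisInput": {"documents": [
        {"id": "1", "language": "en", "text": "Large document text goes here..."}
    ]},
    "tasks": {
        "keyPhraseExtractionTasks": [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionTasks": [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionPiiTasks": [{"parameters": {"model-version": "latest"}}],
    },
}
resp = requests.post(
    endpoint + "/text/analytics/v3.1-preview.3/analyze", headers=headers, json=body)
resp.raise_for_status()
job_url = resp.headers["operation-location"]   # poll this URL for results

# Poll until the job finishes, then inspect the per-task results.
while True:
    job = requests.get(job_url, headers=headers).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(5)
print(job["status"])
&lt;/PRE&gt;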
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Asynchronous Analyze API for&amp;nbsp;Text Analytics for health&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are also introducing a new asynchronous hosted API for Text Analytics for health. As a refresher, earlier this year (July), we &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/introducing-text-analytics-for-health/ba-p/1505152" target="_blank" rel="noopener"&gt;announced&lt;/A&gt; a preview of Text Analytics for health, a capability for the healthcare industry, trained to extract insights from medical data. With Text Analytics for health, users can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Detect words and phrases mentioned in unstructured text as entities that are associated with semantic types in the healthcare and biomedical domain – such as diagnosis, medication name, symptom/sign, and more.&lt;/LI&gt;
&lt;LI&gt;Link entities to medical ontologies and domain-specific coding systems (for example, the &lt;A href="https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html" target="_blank" rel="noopener"&gt;Unified Medical Language System&lt;/A&gt;), and extract meaningful connections between concepts mentioned in text (for example, finding the relationship between a medication name and the dosage associated with it).&lt;/LI&gt;
&lt;LI&gt;Detect negation of the different entities mentioned in the text.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_0-1606250369423.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/235820iE5F39FFECB8D9B44/image-size/large?v=v2&amp;amp;px=999" role="button" title="wmendoza_0-1606250369423.png" alt="Example of Text Analytics for health at work." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Example of Text Analytics for health at work.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Previously, Text Analytics for health was only available for use via containers. This new API gives users the option to use the hosted service and avoid the heavy lifting of hosting containers unless they need to.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The hosted Text Analytics for health operation supports document sizes up to 5k characters and up to 10 documents in a single request. It is available for use in the West US 2, East US 2, Central US, North Europe and West Europe regions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In summary, Text Analytics is now more accessible, with more ways to use the capabilities depending on your scenario. You can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Call the synchronous endpoints to use the Text Analytics features.&lt;/LI&gt;
&lt;LI&gt;Call the asynchronous Analyze API to process larger documents with multiple Text Analytics features in a single call.&lt;/LI&gt;
&lt;LI&gt;Call the hosted asynchronous Text Analytics for health API if the dataset being analyzed contains clinical and biomedical documents.&lt;/LI&gt;
&lt;LI&gt;Use Text Analytics containers to host the endpoint in your own environment to meet your privacy and security requirements.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new Text Analytics asynchronous APIs are available to use in preview today. Please refer to our &lt;A href="https://aka.ms/TAforHealth-Docs" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt; to learn more and get started with these new APIs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=6vX3Us1TOw8&amp;amp;list=PLlrxD0HtieHi0mwteKBOfEeOYf0LJU4O1&amp;amp;index=1" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/6vX3Us1TOw8/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Nov 2020 21:35:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-asynchronous-apis-for-text-analytics-and-text/ba-p/1922422</guid>
      <dc:creator>AshlyYeo</dc:creator>
      <dc:date>2020-11-24T21:35:16Z</dc:date>
    </item>
    <item>
      <title>Neural Text-to-Speech previews five new languages with innovative models in the low-resource setting</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604</link>
      <description>&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post is co-authored with Xianghao Tang, Lihui Wang, Jun-Wei Gan, Gang Wang,&amp;nbsp; Garfield He, Xu Tan and Sheng Zhao&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text-to-Speech&lt;/A&gt; (Neural TTS),&amp;nbsp;part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. For example, the &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt;, &lt;A href="https://customers.microsoft.com/en-us/story/789698-progressive-insurance-cognitive-services-insurance" target="_blank" rel="noopener"&gt;Progressive&lt;/A&gt; and &lt;A href="https://aka.ms/MotorolaSolutions" target="_blank" rel="noopener"&gt;Motorola Solutions&lt;/A&gt; are using Azure Neural TTS to develop conversational interfaces for their voice assistants in English speaking locales. &lt;A href="https://customers.microsoft.com/en-us/story/821105-swisscom-telecommunications-azure-cognitive-services" target="_blank" rel="noopener"&gt;Swisscom&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;Poste Italiane&lt;/A&gt; are adopting neural voices in French, German and Italian to interact with their customers in the European market. &lt;A href="https://customers.microsoft.com/en-us/story/cheetah-mobile-consumer-goods-azure-cognitive-services-china" target="_blank" rel="noopener"&gt;Hongdandan&lt;/A&gt;, a non-profit organization, is using neural voices in Chinese to make their online books audible for the blind people in China.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544" target="_blank" rel="noopener"&gt;September 2020&lt;/A&gt;, we extended Neural TTS to support 49 languages/locales with 68 voices. At the same time, we continue to receive customer requests for more voice choices and more language support globally.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to announce that Azure Neural TTS has extended its global support to five new languages: Maltese, Lithuanian, Estonian, Irish and Latvian, in public preview. At the same time, Neural TTS Container is generally available for customers who want to deploy neural voice models on-prem for specific security requirements. &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Neural TTS previews 5 new languages&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Five new voices and languages are introduced to the Neural TTS portfolio. They are: Grace in Maltese (Malta), Ona in Lithuanian (Lithuania), Anu in Estonian (Estonia), Orla in Irish (Ireland) and Everita in Latvian (Latvia). These voices are available in public preview in &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions#standard-and-neural-voices" target="_blank" rel="noopener"&gt;three Azure regions&lt;/A&gt;: EastUS, SouthEastAsia and WestEurope.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear samples of these voices, or try them with your own text in&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;our demo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;Locale&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Language&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;Voice name&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Audio sample&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“mt-MT-GraceNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/mt-MT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“lt-LT-OnaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lt-LT.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“et-EE-AnuNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/et-EE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“ga-IE-OrlaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Tá an scoil sa mbaile ar oscailt arís inniu.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ga-IE.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="135"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137"&gt;
&lt;P&gt;“lv-LV-EveritaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="293"&gt;
&lt;P&gt;Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lv-LV.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, Azure TTS service now supports 54 languages/locales with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;78 neural voices&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;77 standard voices&lt;/A&gt; available. &amp;nbsp;&lt;/P&gt;
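&lt;P&gt;To try one of the new preview voices from code, a minimal sketch with the Speech SDK for Python could look like the following; the subscription key and region are placeholders, and any of the voice names above can be substituted.&lt;/P&gt;
&lt;PRE&gt;
import azure.cognitiveservices.speech as speechsdk

# Placeholder Speech resource credentials (assumptions for illustration).
speech_config = speechsdk.SpeechConfig(
    subscription="YOUR-SPEECH-KEY", region="westeurope")

# Pick one of the new preview voices, for example the Estonian voice.
speech_config.speech_synthesis_voice_name = "et-EE-AnuNeural"

# Synthesize to the default speaker; use an AudioOutputConfig to write a file instead.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized successfully.")
&lt;/PRE&gt;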
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Behind the scenes: 10X faster voice building with the low resource setting.&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The creation of a TTS voice model normally requires a large volume of training data, especially for extending to a new language, where sophisticated language-specific engineering is required. In this section, we introduce “&lt;STRONG&gt;LR-UNI-TTS&lt;/STRONG&gt;”, a new Neural TTS production pipeline to create TTS languages where training data is limited, i.e., ‘low-resourced’. With this innovation, we are able to improve the Neural TTS locale development with 10x agility and support the five new languages quickly. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;High resource vs. low resource&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Traditionally, it can easily take more than 10 months to extend the TTS service to support a new language due to the extensive language-specific engineering required. This includes collecting tens of hours of language-specific training data and creating hand-crafted components such as text analysis modules. In many cases, one major challenge for supporting a new language is that such a large volume of data is unavailable or hard to find, leaving the language ‘low-resourced’ for TTS model building. To handle this challenge, Microsoft researchers proposed an innovative approach, called &lt;A href="https://arxiv.org/pdf/2008.03687.pdf" target="_blank" rel="noopener"&gt;LRSpeech&lt;/A&gt;, for extremely low-resourced TTS development. LRSpeech has been shown to build good-quality TTS in the low-resource setting, using multilingual pre-training, knowledge distillation, and, importantly, dual transformation between text-to-speech (TTS) and speech recognition (SR).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How LR-UNI-TTS works&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Built on top of LRSpeech and the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;multi-lingual multi-speaker&lt;/A&gt; transformer TTS model (called UNI-TTS), we have designed the offline model training pipeline and the online inference pipeline for the low-resource TTS.&amp;nbsp; Three key innovations contribute to the significant agility gains with this approach.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, by leveraging the parallel speech data (paired speech audio and transcripts) collected during speech recognition development, the LR-UNI-TTS training pipeline greatly reduces the data requirements for refining the base model in the new language. Previously, high-quality multi-speaker parallel data was critical for extending TTS to a new language. TTS speech data is more difficult to collect, as it requires the data to be clean, the speaker carefully selected, and the recording process well controlled to ensure high audio quality.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Second, by applying cross-lingual speaker transfer technology with the &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;UNI-TTS&lt;/A&gt; pipeline, we are able to leverage existing high-quality data in a different language to produce a new voice in the target language. This saves the effort of finding a new professional speaker for each new language. Traditionally, high-quality parallel speech data in the target language is required, which easily takes months for voice design, voice talent selection, and recording.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Lastly, the LR-UNI-TTS approach uses characters instead of phonemes as the input feature to the models, whereas the high-resource TTS pipeline usually includes a multi-step text analysis module that turns text into phonemes, which takes a long time to build.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The figure below describes the offline training pipeline for the low-resource TTS voice model.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_10" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="offline-training.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/234719i763495F6744FA7BC/image-size/large?v=v2&amp;amp;px=999" role="button" title="offline-training.png" alt="Figure 1. The offline training pipeline for the low-resource TTS voice model." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 1. The offline training pipeline for the low-resource TTS voice model.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Specifically, at the offline training stage, we leveraged a few hundred hours of speech recognition data to further refine the UNI-TTS model. This helps the base model learn more prosody and pronunciation patterns for the new locales. Speech recognition data is usually collected in everyday environments using PCs or mobile devices, unlike TTS data, which is normally recorded in professional studios. Although the SR data can be of much lower quality than TTS data, we have found that LR-UNI-TTS can benefit from such data effectively.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this approach, the high-quality parallel data in the new language that is usually required for TTS voice training becomes optional. If such data is available, it can be used to build the target voice in the new language. If not, we can choose a suitable speaker from an existing, different language and transfer that voice into the new language through the cross-lingual speaker transfer-learning capability of UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The chart below describes the runtime inference flow.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="online-inference.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/234722i6DAF707A8F8BE2BE/image-size/large?v=v2&amp;amp;px=999" role="button" title="online-inference.png" alt="Figure 2: The online inference pipeline for the low-resource TTS voice model." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 2: The online inference pipeline for the low-resource TTS voice model.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_11" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;At runtime, a lightweight text analysis component preprocesses the text input with sentence separation and text normalization. Compared to the text analysis component of the high-resource language pipelines, this module is greatly simplified; for instance, it does not include the pronunciation lexicon or letter-to-sound rules used in high-resource languages. The lightweight text analysis component outputs normalized text characters. During this process, we also leverage the text normalization rules from speech recognition development, which significantly reduces the overall cost.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The other components are similar to the high-resource language pipelines. For example, the neural acoustic model uses the &lt;A href="https://arxiv.org/pdf/1905.09263.pdf" target="_blank" rel="noopener"&gt;FastSpeech&lt;/A&gt; model to convert the character input into mel-spectrogram.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, the neural vocoder &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener"&gt;HiFiNet&lt;/A&gt; is used to convert the mel-spectrogram into audio output.&lt;/P&gt;
&lt;P&gt;Overall, using LR-UNI-TTS, a TTS model for a new language can be built in about one month, roughly 10x faster than traditional approaches.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the next section, we share the quality measurement results for the voices built with LR-UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Quality assessments&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Similar to other TTS voices, the quality of the low-resource voices created in the new languages is measured using Mean Opinion Score (MOS) tests and intelligibility tests. MOS is a widely recognized scoring method for speech naturalness evaluation: participants rate speech characteristics such as sound quality, pronunciation, speaking rate, and articulation on a 5-point scale, and an average score is reported. Intelligibility tests measure how intelligible a TTS voice is: judges listen to a set of TTS samples and mark the words that are unintelligible to them. The intelligibility rate is the percentage of correctly intelligible words among the total number of words tested (the number of intelligible words / the total number of words tested * 100%); for example, if 12 of 1,500 tested words are marked unintelligible, the rate is 99.2%. Normally a usable TTS engine needs to reach an intelligibility score above 98%.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The table below summarizes the MOS and intelligibility scores of the five new languages created using LR-UNI-TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;&lt;STRONG&gt;Language (Region)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;&lt;STRONG&gt;Average MOS&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;&lt;STRONG&gt;Intelligibility&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;mt-MT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Maltese (Malta)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;3.59*&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;98.40%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;lt-LT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Lithuanian (Lithuania)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.35&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.25%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;et-EE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Estonian (Estonia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.52&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;98.73%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;ga-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Irish (Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.62&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.43%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="73"&gt;
&lt;P&gt;lv-LV&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="142"&gt;
&lt;P&gt;Latvian (Latvia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="92"&gt;
&lt;P&gt;4.51&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="91"&gt;
&lt;P&gt;99.13%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;FONT size="2"&gt;* Note: MOS scores are subjective and not directly comparable across languages. The MOS of the mt-MT voice is relatively lower but reasonable in this case considering that the human recordings used as the training data for this voice also gots a lower MOS.&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As shown in the table, the voices created with the low resources available are highly intelligible and have achieved high or reasonable MOS scores among the native speakers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It’s worth pointing out that due to the nature of the lightweight text analysis module for the runtime, the phoneme-based SSML tuning capabilities are not supported for the low-resource voice models, for example, &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#use-phonemes-to-improve-pronunciation" target="_blank" rel="noopener"&gt;the ‘phoneme’ and the ‘lexicon’ elements&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Coming next: extending Neural TTS to even more locales&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;LR-UNI-TTS has paved the way for us to extend Neural TTS to more languages for global users more quickly. Most excitingly, LR-UNI-TTS can potentially be applied to preserve languages that are disappearing in the world today, as pointed out in the guiding principles of &lt;A href="https://www.microsoft.com/en-us/research/blog/a-holistic-representation-toward-integrative-ai/" target="_blank" rel="noopener"&gt;XYZ-code&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the five new languages released in public preview, we welcome user feedback as we continue to improve the voice quality. We are also interested in partnering with passionate people and organizations to create TTS for more languages. Contact us (mstts[at]microsoft.com) for more details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;What’s more: Neural TTS Container GA&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Together with the preview of these five new languages, we are happy to share that the Neural TTS Container is now GA. With Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements.&amp;nbsp; Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;how to install Neural TTS Container &lt;/A&gt;&amp;nbsp;and visit the&amp;nbsp;&lt;A href="https://aka.ms/cscontainers-faq" target="_blank" rel="noopener"&gt;Frequently Asked Questions&lt;/A&gt;&amp;nbsp;on Azure Cognitive Services Containers.&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Get started&amp;nbsp;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers, supporting more flexible deployment. Azure Text-to-Speech service provides more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;150 voices in over 50 languages&lt;/A&gt; for developers all over the world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Check out our &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 19 Nov 2020 16:30:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-11-19T16:30:01Z</dc:date>
    </item>
    <item>
      <title>How to operationalize more than 100 AI models in as little as 12 weeks using Azure Databricks</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-operationalize-more-than-100-ai-models-in-as-little-as-12/ba-p/1892062</link>
      <description>&lt;P&gt;Organizations are leveraging artificial intelligence (AI) and machine learning (ML) to derive insight and value from their data and to improve the accuracy of forecasts and predictions.&amp;nbsp;&lt;FONT style="background-color: #ffffff;"&gt;In rapidly changing environments, &lt;/FONT&gt;&lt;A href="https://dbricks.co/3kCItuU" target="_blank" rel="noopener"&gt; Azure Databricks&lt;/A&gt; enables organizations to spot new trends, respond to unexpected challenges and predict new opportunities.&amp;nbsp;&lt;FONT style="background-color: #ffffff;"&gt;Data teams are using Delta Lake to &lt;A href="https://dbricks.co/3pB3VUP" target="_blank" rel="noopener"&gt;accelerate ETL pipelines&lt;/A&gt; and MLflow to establish a &lt;A href="https://dbricks.co/32RbkG2" target="_blank" rel="noopener"&gt;consistent ML lifecycle&lt;/A&gt;.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Solving the complexity of ML frameworks, libraries and packages&lt;/FONT&gt;&lt;/H2&gt;
&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;Customers frequently struggle to manage all of the libraries and frameworks for machine learning on a single laptop or workstation. There are so many libraries and frameworks to keep in sync (H2O, PyTorch, scikit-learn, MLlib). In addition, you often need to bring in other Python packages, such as Pandas, Matplotlib, numpy and many others. Mixing and matching versions and dependencies between these libraries can be incredibly challenging.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Databricks-runtime-for-ML.png" style="width: 512px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233822iE865E26B69A99775/image-size/large?v=v2&amp;amp;px=999" role="button" title="Databricks-runtime-for-ML.png" alt="Databricks-runtime-for-ML.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;FONT style="background-color: #ffffff;"&gt;Figure 1.&amp;nbsp;Databricks Runtime for ML enables ready-to-use clusters with built-in ML Frameworks&lt;/FONT&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;With Azure Databricks, these frameworks and libraries are packaged so that you can select the versions you need as a single dropdown. We call this the Databricks Runtime. Within this runtime, we also have a specialized runtime for machine learning which we call the &lt;A href="https://dbricks.co/36W25Wr" target="_blank" rel="noopener"&gt;Databricks Runtime for Machine Learning&lt;/A&gt; (ML Runtime). All these packages are pre-configured and installed so you don’t have to worry about how to combine them all together. Azure Databricks updates these every 6-8 weeks, so you can simply choose a version and get started right away.&lt;BR /&gt;&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Establishing a consistent ML lifecycle with MLflow&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;The goal of machine learning is to optimize a metric such as forecast accuracy. Machine learning algorithms are run on training data to produce models. These models can be used to make predictions as new data arrive. The quality of each model depends on the &lt;A href="https://dbricks.co/2UzmHO8" target="_blank" rel="noopener"&gt;input data and tuning parameters&lt;/A&gt;. Creating an accurate model is an &lt;A href="https://dbricks.co/2Kfq5vS" target="_blank" rel="noopener"&gt;iterative process&lt;/A&gt; of experiments with various libraries, algorithms, data sets and models. The MLflow open source project started about two years ago to manage each phase of the model management lifecycle, from input through hyperparameter tuning. &lt;A href="https://dbricks.co/2K5UQmK" target="_blank" rel="noopener"&gt;MLflow recently joined the Linux Foundation&lt;/A&gt;. Community support has been tremendous, with 250 contributors, including large companies. In June, MLflow surpassed 2.5 million monthly downloads.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MLflow-unifies-data-scientists-and-engineers.png" style="width: 512px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233825i7021AD73B8DCEE19/image-size/large?v=v2&amp;amp;px=999" role="button" title="MLflow-unifies-data-scientists-and-engineers.png" alt="MLflow-unifies-data-scientists-and-engineers.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;EM&gt;&lt;FONT style="background-color: #ffffff;"&gt;Diagram: MLflow unifies data scientists and data engineers&lt;/FONT&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Ease of infrastructure management&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;Data scientists want to focus on their models, not infrastructure. You don’t have to manage dependencies and versions. It scales to meet your needs. As your data science team begins to process bigger data sets, you don’t have to do capacity planning or requisition/acquire more hardware. With Azure Databricks, it’s easy to onboard new team members and grant them access to the data, tools, frameworks, libraries and clusters they need.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Alignment Healthcare&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;&lt;A href="https://dbricks.co/36K6FXB" target="_blank" rel="noopener"&gt;Alignment Healthcare&lt;/A&gt;, a rapidly growing Medicare insurance provider, serves one of the most at-risk groups of the COVID-19 crisis—seniors. While many health plans rely on outdated information and siloed data systems, Alignment processes a wide variety and large volume of near real-time data into a unified architecture to build a revolutionary digital patient ID and comprehensive patient profile by leveraging Azure Databricks. This architecture powers more than 100 AI models designed to effectively manage the health of large populations, engage consumers, and identify vulnerable individuals needing personalized attention—with a goal of improving members’ well-being and saving lives.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;&lt;FONT style="background-color: #ffffff;"&gt;Building your first machine learning model with Azure Databricks&lt;/FONT&gt;&lt;/H2&gt;
&lt;DIV&gt;&lt;FONT style="background-color: #ffffff;"&gt;To help you get a feel for Azure Databricks, follow the code samples and videos in &lt;A href="https://dbricks.co/38Sjz8v" target="_blank" rel="noopener"&gt;this blog post&lt;/A&gt; to build a simple model using sample data in Azure Databricks. Learn how to by attending an &lt;A href="https://dbricks.co/2K93eBX" target="_blank" rel="noopener"&gt;Azure Databricks event&lt;/A&gt;, watch how you can &lt;A href="https://dbricks.co/3nxfzP4" target="_blank" rel="noopener"&gt;Turbocharge your business with Machine Learning&lt;/A&gt;, leverage this &lt;A href="https://dbricks.co/3nuBRAJ" target="_blank" rel="noopener"&gt;free Azure Databricks ML training module on MS Learn&lt;/A&gt; and join us at our next &lt;A href="https://dbricks.co/3kB1Qog" target="_blank" rel="noopener"&gt;Azure Databricks Office Hours&lt;/A&gt;.&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Tue, 17 Nov 2020 14:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-operationalize-more-than-100-ai-models-in-as-little-as-12/ba-p/1892062</guid>
      <dc:creator>ClintonWFord-Databricks</dc:creator>
      <dc:date>2020-11-17T14:00:00Z</dc:date>
    </item>
    <item>
      <title>November 2020 – Conversational AI update</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/november-2020-conversational-ai-update/ba-p/1892528</link>
      <description>&lt;P&gt;We are excited to announce the November release of the Bot Framework SDK and Composer, driving the Microsoft Conversational AI platform forward and building on the announcements we made in September at Microsoft Ignite. Our November update sees new updates to the Bot Framework SDK and Bot Framework Composer, adding new capabilities for developers and improving integration with our key partners, including &lt;A href="http://powerva.microsoft.com/" target="_self"&gt;Power Virtual Agents&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/healthbot/" target="_self"&gt;HealthBot&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework v4.11&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/what-is-new?view=azure-bot-service-4.0" target="_self"&gt;Version 4.11 of the Bot Framework SDK&lt;/A&gt;, including new releases for .NET, JavaScript, Python and Java (preview 7), along with updates to our tooling, including the CLI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Following our quality-focused 4.10 release, we continue to push in this area, including improvements to the commonly used typing and transcript logging middleware behavior and associated error handling.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For developers building solutions for Microsoft Teams, new support for meetings has been added, including the Meeting Participant API and meeting specific notifications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We continue to reduce developer friction for Skill development, adding the ability to test Skills locally, using the Bot Framework Emulator, without requiring an App Id and password. Additional scenarios, such as interruption support when calling a Skill and the ability to update or delete activities from a Skill have also been added.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Skills support has now also been added to &lt;A href="https://docs.microsoft.com/en-us/healthbot/" target="_self"&gt;HealthBot&lt;/A&gt;, a cloud platform for virtual health bots and assistants built on Bot Framework, with solutions now able to consume, or themselves be consumed as, a Bot Framework Skill.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’re also undertaking significant investments in automated testing in this area, with the opportunity for you to review and provide feedback on the current specifications for &lt;A href="https://github.com/microsoft/botframework-sdk/blob/main/specs/testing/skills/SkillsFunctionalTesting.md" target="_self"&gt;Functional Testing&lt;/A&gt; and the &lt;A href="https://github.com/microsoft/BotFramework-FunctionalTests/blob/main/specs/TransciptTestRunner.md" target="_self"&gt;Test Runner&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Further improvements to our documentation include expanding content across Adaptive Dialogs, Skills, overall architecture topics, as well as adding &lt;A href="https://docs.microsoft.com/en-us/java/api/?term=microsoft.bot.builder" target="_self"&gt;reference documentation for the Java SDK preview&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework Composer v1.2&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A &lt;A href="https://docs.microsoft.com/en-us/composer/what-is-new" target="_self"&gt;new release of Composer (v1.2)&lt;/A&gt; is now available. This release deepens integration with Power Virtual Agents (PVA), part of the Power Platform, with a new &lt;A href="https://powervirtualagents.microsoft.com/en-us/blog/power-virtual-agents-integration-with-bot-framework-composer-is-available-in-public-preview/" target="_self"&gt;public preview of PVA integration with Bot Framework Composer&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Blog_PVA_Composer_HD.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233926iA4B430BED5ED451D/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog_PVA_Composer_HD.gif" alt="Blog_PVA_Composer_HD.gif" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;BR /&gt;Users of the no-code PVA platform were already able to extend their solutions by consuming Bot Framework Skills. Now, PVA solutions can be opened in Bot Framework Composer, using a deep-link from the PVA portal, extending them with more sophisticated capabilities and enabling the collaboration between business users and developers on the same project.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://powervirtualagents.microsoft.com/en-us/blog/power-virtual-agents-integration-with-bot-framework-composer-is-available-in-public-preview/" target="_self"&gt;Try the new Power Virtual Agents integration with Bot Framework Composer today!&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When ready, Composer developers can publish directly from Composer, using a pre-configured publishing profile, back into the PVA portal, with new PVA Topics added using Composer then shown alongside existing Topics and immediately ready for testing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An upcoming release of Composer, expected in December, will add improved provisioning and publishing support and enhanced QnA Maker knowledgebase integration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of the December release, users will also have the option to enable new preview capabilities through the addition of feature flags.&amp;nbsp; The first preview features planned include &lt;A href="https://aka.ms/bf-orchestrator" target="_self"&gt;Orchestrator&lt;/A&gt; integration, the new intent detection and arbitration (dispatch) technology that runs locally within your bot, along with Form Dialogs, enabling the rapid generation of intelligent slot-filling dialogs, including complex capabilities such as slot disambiguation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Nightly builds of Composer are available (enabled via the Composer settings page) which allow you to try the latest updates as soon as they are available.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Nov 2020 21:21:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/november-2020-conversational-ai-update/ba-p/1892528</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-11-16T21:21:35Z</dc:date>
    </item>
    <item>
      <title>Introducing QnA Maker managed: now in public preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575</link>
      <description>&lt;P&gt;QnA Maker is an Azure Cognitive Service that allows you to create a conversational layer over your data- in minutes. Today, we are announcing a new version of QnA Maker which advances several core capabilities like better relevance and precise answering, by introducing state-of-art deep learning technologies.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604338472497.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/233098iCD0584EEFD441E53/image-size/large?v=v2&amp;amp;px=999" role="button" title="nerajput_1-1604338472497.png" alt="Illustrative representation of QnA Maker functionality." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Illustrative representation of QnA Maker functionality.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Overview of new QnA Maker managed capabilities&lt;/H1&gt;
&lt;P&gt;Summary of new features introduced:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deep learnt ranker with enhanced relevance of results across all &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support" target="_blank" rel="noopener"&gt;supported languages&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Precise phrase/short answer extraction from answer passages.&lt;/LI&gt;
&lt;LI&gt;Simplified resource management by reducing the number of resources deployed.&lt;/LI&gt;
&lt;LI&gt;E2E region support for Authoring + Prediction.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Detailed description of the new features is further down in this article. Learn how to migrate to the new QnA Maker managed (Preview) knowledge base &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/tutorials/migrate-knowledge-base" target="_blank" rel="noopener"&gt;here.&lt;/A&gt;&lt;/P&gt;
&lt;H1&gt;QnA Maker managed (Preview) Architecture&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;In the QnA Maker managed (Preview) architecture, there are only two resources: the QnA Maker service for authoring and computation, and Azure Cognitive Search for storage and L1 ranking. This simplifies resource creation and management: customers now manage only 2 resources instead of 5. &amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;QnA Maker managed (Preview) also allows the user to configure a language setting specific to each Knowledge Base.&lt;/LI&gt;
&lt;LI&gt;Computation has been moved out of the user subscription, so customers no longer have to manage scaling and availability. This allowed us to use a state-of-the-art deep learning model for the L2 ranker, improving it horizontally across languages, so we now support all 50+ languages with better precision.&lt;/LI&gt;
&lt;LI&gt;The QnA Maker service will be available in multiple regions to give customers the flexibility to keep their end-to-end service in one region.&lt;/LI&gt;
&lt;LI&gt;For inference logs and telemetry, the latest version uses Azure Monitoring instead of App Insights. To keep the experience seamless and easy to adopt, all the APIs have been kept backward compatible (a minimal query sketch follows this list), and there is almost zero change in the management portal experience.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
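&lt;P&gt;As a minimal illustration of the backward-compatible runtime API mentioned above (a sketch, not code from this article), the snippet below sends a single generateAnswer request; the endpoint host, knowledge base ID, and endpoint key are placeholders for your own values.&lt;/P&gt;
&lt;PRE&gt;
# Minimal QnA Maker generateAnswer sketch (assumes: pip install requests)
import requests

ENDPOINT = "https://YOUR-RESOURCE-NAME.azurewebsites.net"   # placeholder endpoint host
KB_ID = "YOUR-KNOWLEDGE-BASE-ID"                            # placeholder knowledge base id
ENDPOINT_KEY = "YOUR-ENDPOINT-KEY"                          # placeholder endpoint key

url = f"{ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
headers = {
    "Authorization": f"EndpointKey {ENDPOINT_KEY}",
    "Content-Type": "application/json",
}
body = {"question": "can someone ring me", "top": 1}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
for answer in response.json()["answers"]:
    print(answer["score"], answer["answer"])
&lt;/PRE&gt;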
&lt;P&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604339027855.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230873i294601F3DC0BA357/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_1-1604339027855.png" alt="nerajput_1-1604339027855.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H1&gt;New features of QnA Maker managed (Preview)&lt;/H1&gt;
&lt;P&gt;This section describes all the distinguishing features of QnA Maker managed (Preview) in detail.&lt;/P&gt;
&lt;H2&gt;Simplified Create Blade&lt;/H2&gt;
&lt;P&gt;Onboarding to QnA Maker managed (Preview) and resource creation have been kept simple. Now, you will see a checkbox labeled &lt;STRONG&gt;Managed&lt;/STRONG&gt;, as shown below. As soon as you select the checkbox, the form is updated with the required resources.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_1-1604339332630.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230881iD29DB949BC8E7638/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_1-1604339332630.png" alt="nerajput_1-1604339332630.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Precise Answering&lt;/H2&gt;
&lt;P&gt;The Machine Reading Comprehension-based answer span detection feature is most beneficial in scenarios where customers have long passages as answers in their Knowledge Base. Today, they put a good amount of manual effort into curating short, precise answers and ingesting them into the Knowledge Base.&lt;/P&gt;
&lt;P&gt;The new feature gives them the flexibility to choose either the precise answer or the answer passage; customers can make this decision based on the confidence scores of the precise short answer and the answer passage. Here are some examples showing how short answers can be useful:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nerajput_0-1604339230181.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/230880iD3FFAB4B253833CA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nerajput_0-1604339230181.png" alt="nerajput_0-1604339230181.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Deep Learnt ranker&lt;/H2&gt;
&lt;P&gt;The new L2 ranker is based on the &lt;A href="https://www.microsoft.com/en-us/research/blog/microsoft-turing-universal-language-representation-model-t-ulrv2-tops-xtreme-leaderboard/" target="_self"&gt;Turing multilingual language model (T-ULRv2)&lt;/A&gt;, a deep learning-based transformer model, which improves the precision of the service for all languages.&amp;nbsp;For any user query, the new L2 ranker understands the semantics of the query better and returns better-aligned results. The model is not language specific and is targeted at improving overall precision across all languages horizontally. Here are some examples comparing the results of the current service and the QnA Maker managed (Preview) service:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="671"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Query&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Current GA results &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;QnA Maker managed (Preview) results &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp; Improvements in Preview&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;can someone ring me&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;I can tell you all about Wi-Fi calling, including the devices that support Wi-Fi calling and where you can get more information yourself. Feel free to ask me a question and I'll do what I can to answer it&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;Yes, you can make and receive calls using Wi-Fi calling. Pretty nifty, right?&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;The new L2 ranker understands the relevance between “ring me” and “make and receive calls” and returns a more relevant result, unlike the current GA model, which returned a generic answer.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;can’t connect to mobile data&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="179"&gt;
&lt;P&gt;You'll be connected to Wi-Fi, so it'll only use your minutes and text allowances.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="188"&gt;
&lt;P&gt;If you don't have mobile signal, it's no problem. With Three inTouch Wi-Fi Calling, you can call and text whenever you're on Wi-Fi in the UK, even without mobile signal.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="214"&gt;
&lt;P&gt;The new L2 ranker again understands the query better: it recognizes that mobile data is related to mobile signal and therefore returns a better result, based on the data present in the Knowledge Base, than the current GA model.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;E2E region support&lt;/H2&gt;
&lt;P&gt;With QnA Maker managed (Preview), our management service is no longer limited to the West US region. We are offering end-to-end region support for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;South Central US&lt;/LI&gt;
&lt;LI&gt;North Europe&lt;/LI&gt;
&lt;LI&gt;Australia East.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Other hero regions will be added when we go GA.&lt;/P&gt;
&lt;H2&gt;Knowledge Base specific language setting&lt;/H2&gt;
&lt;P&gt;Now, customers can create Knowledge Bases with different language settings within a single service. This feature is beneficial for users who have multi-language scenarios and need to power the service for more than one language. In this case, there is a test index specific to every Knowledge Base, so the customer can verify how the service performs for each language.&lt;/P&gt;
&lt;P&gt;You can configure this setting only when creating the first Knowledge Base of the service; once set, it cannot be updated.&lt;/P&gt;
&lt;H2&gt;Pricing&lt;/H2&gt;
&lt;P&gt;The public preview of QnA Maker managed will be free in all regions (you only pay for the Azure Cognitive Search SKU). Standard pricing will apply when the service reaches GA, expected by mid-2021.&lt;/P&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcognitive-services%2Fqnamaker%2Fhow-to%2Fset-up-qnamaker-service-azure&amp;amp;data=04%7C01%7CNeha.Rajput%40microsoft.com%7C1c1572c23454483ee7d308d87c580ec8%7C72f988bf86f141af91ab2d7cd011db47%7C0%7C0%7C637396064955343068%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=X4h6Z2rWsCewb17gHyPPHZDYqMSX3bYlXkD7pZDP9%2Bo%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Create your QnA Maker managed (Preview) service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcognitive-services%2Fqnamaker%2Ftutorials%2Fmigrate-knowledge-base&amp;amp;data=04%7C01%7CNeha.Rajput%40microsoft.com%7C1c1572c23454483ee7d308d87c580ec8%7C72f988bf86f141af91ab2d7cd011db47%7C0%7C0%7C637396064955343068%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;amp;sdata=NbpwblEPLnDhTGRD9tarqPdHvH7FmWISwkF8hfvsPFA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Migrate your knowledge base to the new Preview.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=h1wwjBpSeZ4" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/h1wwjBpSeZ4/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Nov 2020 05:55:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575</guid>
      <dc:creator>nerajput</dc:creator>
      <dc:date>2020-11-19T05:55:03Z</dc:date>
    </item>
    <item>
      <title>Azure speaks your language: the 3 immediate benefits for your organization</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-speaks-your-language-the-3-immediate-benefits-for-your/ba-p/1853544</link>
      <description>&lt;P class="hp hq fx hr b hs ht hu hv hw hx hy hz ia ib ic id ie if ig ih ii cx dv" data-selectable-paragraph=""&gt;The last several years brought exciting innovations in the field of Artificial Intelligence, especia&lt;SPAN&gt;l&lt;/SPAN&gt;ly when it comes to advancements in speech and language processing. Processing speech and making text and audio information searchable enables a diverse set of innovative applications, including helping researchers in searching for related papers, or building information graphs for predicting the best new drug candidates, or uncovering issues with products and services in near real time. For region like Central and Eastern Europe, which includes 30+ countries, most speaking their own language, support for local languages is a critical condition for implementing innovation. That’s why the recent (September 2020) Azure Speech services update has opened a whole new area of opportunity for our region.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ht hu hv hw hx hy hz ia ib ic id ie if ig ih ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;With updated language support,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;most of the EU languages are now supported in Azure Speech services&lt;/STRONG&gt;. For region which I am covering in my current role, it means that we now have support for all of our CEE EU languages&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;(Polish, Bulgarian, Czech, Greek, Croatian, Hungarian, Romanian, Slovak, Slovenian, Estonian, Lithuanian, Latvian, Maltese)&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;Russian&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;in Azure Speech and Translator services. Additionally, our speech generation models have also been updated, now leveraging the Neural TTS - a powerful speech synthesis capability, which enables to convert text to lifelike speech which is close to human-parity. Below you will find&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG class="hr ck"&gt;3 benefits, how this might help you advance your products and services today&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="health.jpeg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231462iF4E4CD2A64FAE41D/image-size/large?v=v2&amp;amp;px=999" role="button" title="health.jpeg" alt="health.jpeg" /&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Automatic generation of medical summary from spoken conversations between doctors and patients&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;First&lt;/STRONG&gt;, analyzing speech data or generating speech enables you to extract insights from audio or video information, which otherwise would be unreachable for analytical systems. This might include data like customer support conversations or employee speech in videos or transcribing speech for field employees or doctors. Communicating with your customers with natural-sounding generated speech in your own language is another area of innovation, which enables scenarios from voice announcements to supporting people with visual impairments to building voice assistants. Is information the new currency? If you answer “yes” to this — why then would you have terabytes of currency sitting without you getting use of it? Now you can turn it into tangible cash-flow.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv lia-indent-padding-left-30px" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv lia-indent-padding-left-30px" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;&lt;EM class="jd"&gt;Azure Speech&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM class="jd"&gt;services are a sub-set of pre-built (but customizable) APIs for working with Speech. This includes transcribing spoken language into text for further analysis (Speech-to-Text) and generating naturally sounding speech form text input (Text-to-Speech). Azure Translator is another piece in the puzzle, which has also received major update for the languages, now translating text between 70+ languages.&lt;/EM&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;Second&lt;/STRONG&gt;, there are new scenarios enabled now by these pre-built AI models. Do you have that innovative idea for analysing customer conversations or augmenting your service with spoken messages in your local language? Often, these ideas were not realized due to the associated challenges like finding the right skilled people within your organization and investing into a project with unknown development cycle and returns. Now it is possible to build a realistic prototype app quickly to extract insights from your speech data, by calling the service through the API — in days, if not hours.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="CLO18_headset_003.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231466i2ABB926C77241685/image-size/large?v=v2&amp;amp;px=999" role="button" title="CLO18_headset_003.jpg" alt="CLO18_headset_003.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;FONT size="2"&gt;Analysing customer support conversations brings insights from priceless data, which is untapped without applying Speech processing&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&lt;STRONG class="hr ck"&gt;Third&lt;/STRONG&gt;, this is one of those cloud services, which may work without sending your data to the cloud! Many of Azure Cognitive Services today may be deployed right within your own data center as containers. This means, that none of the actual data will be sent to the cloud, as even processing will happen locally. In this case, only billing information will be exchanged with Azure.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;Interested enough to give it a try? If you are interested in learning more, you may&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A class="bo js" href="https://azure.microsoft.com/en-us/overview/sales-number/?wt.mc_id=AID3025025_QSG_BLOG_488906" target="_blank" rel="noopener nofollow"&gt;request detailed information or virtual session on Azure Cognitive Services&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from our sales representatives (please specify whether you are looking for the session on Azure Cognitive services, or details of your specific projects where Speech services may be used). To read more or test Azure Speech services capabilities in your language, please refer to our&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A class="bo js" href="https://azure.microsoft.com/en-us/services/cognitive-services/speech-services/?wt.mc_id=AID3025025_QSG_BLOG_488907" target="_blank" rel="noopener nofollow"&gt;Azure Speech Services Documentation&lt;/A&gt;.&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="hp hq fx hr b hs ij ht hu hv ik hw hx hy il hz ia ib im ic id ie in if ig ii cx dv" data-selectable-paragraph=""&gt;Looking forward to the exciting results you will achieve in your business with the updated Azure Speech Services!&lt;/P&gt;</description>
      <pubDate>Wed, 04 Nov 2020 15:48:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-speaks-your-language-the-3-immediate-benefits-for-your/ba-p/1853544</guid>
      <dc:creator>dturchyn</dc:creator>
      <dc:date>2020-11-04T15:48:28Z</dc:date>
    </item>
    <item>
      <title>Azure Neural TTS upgraded with HiFiNet, achieving higher audio fidelity and faster synthesis speed</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860</link>
      <description>&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post was co-authored with Jinzhu Li and Sheng Zhao&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&amp;nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;close to human-parity&lt;/A&gt;. Since its launch, we have seen it widely adopted in a variety of scenarios by many Azure customers, from voice assistants like the customer service bot like &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt; and &lt;A href="https://cloudwars.co/covid-19/microsoft-ceo-satya-nadella-10-thoughts-on-the-post-covid-19-world/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Poste Italiane&lt;/SPAN&gt;&lt;/A&gt;, to audio content creation scenarios like &lt;A href="https://youtu.be/m-3-D7S0piw?t=668" target="_blank" rel="noopener"&gt;Duolingo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Voice quality, which includes the accuracy of pronunciation, the naturalness of prosody such as intonation and stress patterns, and &lt;EM&gt;the fidelity of the audio&lt;/EM&gt;, is the key reason that customers are migrating from traditional TTS voices to neural voices. Today we are glad to share that we have upgraded our Neural TTS voices with a new-generation vocoder, called &lt;EM&gt;HiFiNet&lt;/EM&gt;, which results in much higher audio fidelity while significantly improving the synthesis speed. This is particularly beneficial to customers whose scenarios rely on hi-fi audio or long interactions, including video dubbing, audio books, or online education materials.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What’s new?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our recent updates to Azure Neural TTS voices include a major upgrade of the vocoder. The voice fidelity has been improved significantly, and audio quality defects such as glitches and small noises are largely reduced. Our tests show that this new vocoder generates audio without audible quality loss compared to the recordings in the training data (more details are introduced later). In addition, it can synthesize speech much faster than our previous version of the product. All these benefits are achieved through a new-generation neural vocoder, called &lt;EM&gt;HiFiNet&lt;/EM&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What is a vocoder and why does it matter?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A vocoder is a major component in speech synthesis, or text-to-speech. It turns an intermediate form of the audio, called acoustic features, into an audible waveform. A neural vocoder is a specific vocoder design that uses deep learning networks and is a critical module of Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft &lt;SPAN&gt;Azure &lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;Neural TTS &lt;/A&gt;&amp;nbsp;consists of three major components in the engine: Text Analyzer, Neural Acoustic Model, and Neural Vocoder. To generate natural synthetic speech from text, the text is first input into the &lt;EM&gt;Text Analyzer&lt;/EM&gt;, which produces output in the form of a phoneme sequence. A phoneme is a basic unit of sound that distinguishes one word from another in a particular language. The sequence of phonemes defines the pronunciations of the words in the text. The phoneme sequence then goes into the &lt;EM&gt;Neural Acoustic Model&lt;/EM&gt; to predict acoustic features, which define speech signals such as speaking style, speed, intonation, and stress patterns. Finally, the &lt;EM&gt;Neural Vocoder&lt;/EM&gt; converts the acoustic features into audible waves, and the synthetic speech is generated.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The vocoder is critical to the final audio quality. Specifically, it directly impacts the fidelity of a wave, including clearness, timbre, etc. Let’s hear the difference in audio quality with samples generated using different neural vocoders based on the same acoustic features (recommended to &lt;STRONG&gt;listen with a high-quality headset&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;Vocoder versions&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;
&lt;P&gt;2018 vocoder for real-time synthesis&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;
&lt;P&gt;2019 vocoder for real-time synthesis&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180px"&gt;
&lt;P&gt;2020 vocoder for real-time synthesis (HiFiNet)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="136px"&gt;
&lt;P&gt;&lt;EM&gt;“&lt;/EM&gt;&lt;EM&gt;Top cinematographers weigh in on filmmaking in the age of streaming.”&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="168px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2018-vocoder.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="168px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2019-vocoder-new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="180px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/2020-vocoder-new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With each vocoder update, the speech generated sounds clearer, voice less muffled and noises reduced. &amp;nbsp;In the next section, we introduce how a &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder is trained during the creation of a neural voice model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;How does HiFiNet work?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the Azure TTS system, neural voice models are trained on human voice recordings using deep learning networks. As part of the training, a vocoder is built with the goal of generating high-quality audio output close to the original recordings in the training data. In the meantime, it needs to run fast enough to produce at least 24,000 samples per second, i.e. with a sampling rate of 24khz, which is the default sampling rate of Azure Neural TTS voice models.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Leveraging state-of-the-art research on vocoders, we designed the training pipeline for &lt;EM&gt;HiFiNet&lt;/EM&gt;, the new-generation Neural TTS vocoder, and applied it to create neural voice models in Azure Neural TTS. This pipeline is built with one simple goal: produce machine-generated audio waves (synthetic speech) that are indistinguishable from the original waves (human recordings), at high speed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The chart below describes how the &lt;EM&gt;HiFiNet&lt;/EM&gt; training pipeline works. With this pipeline, two key networks are trained: a &lt;EM&gt;Generator&lt;/EM&gt;, which is used to create audio (‘Generated Wave’), and a &lt;EM&gt;Discriminator&lt;/EM&gt;, which is used to identify the gap between the created audio and its training data (‘Real Wave’). The goal of the training is to make the &lt;EM&gt;Generator&lt;/EM&gt; generate waves that the &lt;EM&gt;Discriminator&lt;/EM&gt; can’t distinguish from the original real recordings.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Vocoder-Training.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231194iEF2762BE612324B0/image-size/large?v=v2&amp;amp;px=999" role="button" title="Vocoder-Training.png" alt="Training pipeline of the HiFiNet Vocoder" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Training pipeline of the HiFiNet Vocoder&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;First, the training pipeline uses the original human recording as input and extract the acoustic features. Then, the acoustic features are fed into the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; module which generates waves, so we get two sets of waves: the original recordings as real waves, and the generated waves as fake waves. Next, the two sets of waves are fed into the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; network to distinguish which are the real waves and which are the generated fake waves. This output from the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; is used as feedback to help the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; and &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; to learn better. As this training loop continues, the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; becomes smarter to create indistinguishable fake waves, while the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; gets smarter in making the right judgements. Finally, when the training reaches a point where &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Discriminator&lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt; can’t distinguish the waves generated by the &lt;/SPAN&gt;&lt;EM style="font-family: inherit;"&gt;Generator &lt;/EM&gt;&lt;SPAN style="font-family: inherit;"&gt;from real waves, the vocoder is successfully trained. This vocoder is capable of producing audio outputs without noticeable quality loss compared to the original human recordings.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the next section we describe the performance of &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What are the benefits?&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet significantly improves audio quality.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To understand the benefit of &lt;EM&gt;HiFiNet&lt;/EM&gt;, we conducted a number of tests in many aspects which yielded positive results. Our tests show that the &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder significantly improves the audio quality of the Neural TTS voice output, compared to our previous version of the product.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;CMOS (Comparative Mean Opinion Score) is a well-accepted method in the speech industry for comparing the voice quality of two TTS systems. A CMOS test is similar to A/B testing, where participants listen to different pairs of audio samples generated by two systems and provide their subjective opinions on how A compares to B. Normally in one test, we recruit 30-60 anonymous testers with qualified language expertise to evaluate around 50 pairs of audio samples side by side. The result is reported as the &lt;EM&gt;CMOS gap&lt;/EM&gt;, which measures the average difference in opinion score between the two systems. In cases where the absolute value of the CMOS gap is &amp;lt;0.1, we claim systems A and B are on par. When the absolute value of the CMOS gap is &amp;gt;=0.1, one system is reported better than the other. If the absolute value of the CMOS gap is &amp;gt;=0.2, we say one system is significantly better than the other.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have run hundreds of CMOS tests comparing &lt;EM&gt;HiFiNet&lt;/EM&gt; to our previous vocoder, on 68 neural voices across 49 languages/locales. Our results show that &lt;EM&gt;HiFiNet&lt;/EM&gt; is notably better than the previous production vocoder in Azure Neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In general, the audio quality, and especially the fidelity, is clearly improved. On average, across all languages, the &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder achieves a CMOS gain higher than 0.2 compared to the previous vocoder, which means the improvement is audible to users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In particular, &lt;EM&gt;HiFiNet&lt;/EM&gt; also has better robustness than the previous version of the vocoder. Audio defects are largely reduced in the waves generated with &lt;EM&gt;HiFiNet&lt;/EM&gt;. Our tests show that with the previous production vocoder, in 100 test samples, our testers could hear about 10 defects such as beeps, clicks, or fidelity loss. Although most of them are not obvious, they can still be annoying if they keep occurring in long audio or multi-turn voice interactions. These defects are no longer reported with the &lt;EM&gt;HiFiNet&lt;/EM&gt; audio, under the same test procedure with the same test sets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these advantages, we have updated the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;Neural TTS voices&lt;/A&gt; on Azure Cognitive Services with the new vocoder. Listen to the samples below to hear the difference. &amp;nbsp;Or test the new voices using your own text with our &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;online demo&lt;/A&gt;.&lt;/P&gt;
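&lt;P&gt;To try one of the updated neural voices from code rather than the online demo, here is a minimal Speech SDK sketch (illustrative only, not from this article); the key, region, and output file name are placeholders, and the voice name is simply one example of a neural voice.&lt;/P&gt;
&lt;PRE&gt;
# Minimal Neural TTS synthesis sketch (assumes: pip install azure-cognitiveservices-speech)
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR-SPEECH-KEY",  # placeholder
                                       region="YOUR-REGION")            # placeholder
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # example neural voice

audio_config = speechsdk.audio.AudioOutputConfig(filename="sample.wav")  # write to a file
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config,
                                          audio_config=audio_config)

result = synthesizer.speak_text_async(
    "Top cinematographers weigh in on filmmaking in the age of streaming.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio written to sample.wav")
&lt;/PRE&gt;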
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;
&lt;P&gt;Previous vocoder&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="138"&gt;
&lt;P&gt;HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174" scope="col" style="width: 200px;"&gt;
&lt;P&gt;HiFiNet CMOS gain&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder24k-Cheerful-00018.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN24k-Cheerful-00018.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.122 (Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;German&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00022-oldVocoder24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00022-HiFiNet24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.193 (Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Chinese (Mandarin, Simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder24k-News-00005.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN24k_News-00005.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.348 (Obviously Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Japanese&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.oldVocoder-LongSentence-00032.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiGAN-LongSentence-00032.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.465 (Obviously Better)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet reaches human-parity audio fidelity.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, we have conducted tests comparing the audio quality of human recordings and the computer-generated audio produced with &lt;EM&gt;HiFiNet&lt;/EM&gt;. To make the comparison more accurate and more focused on the vocoder itself, we use the acoustic features extracted directly from human recordings instead of the TTS-predicted acoustic features, so the acoustic differences are controlled and only the vocoder is evaluated in the CMOS tests. Participants are asked to give their scores for different pairs of the generated waves and human recordings. Our result shows the CMOS gap of the audio produced by &lt;EM&gt;HiFiNet&lt;/EM&gt; compared to human recordings is -0.05, which means the difference is hardly audible and the audio quality is on par.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear how close the &lt;EM&gt;HiFiNet&lt;/EM&gt; audio fidelity is to the human recordings with the samples below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;
&lt;P&gt;Human recording&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="138"&gt;
&lt;P&gt;HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174" scope="col" style="width: 200px;"&gt;
&lt;P&gt;HiFiNet CMOS gap&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.recording-GeneralSentence-0000000365.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiNet-GeneralSentence-0000000365.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;+0.045 (on par)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="90"&gt;
&lt;P&gt;Chinese (Mandarin, Simplified)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="144"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.recording-GeneralSentence-0001000011.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="138"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.HiFiNet-GeneralSentence-0001000011.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="174"&gt;
&lt;P&gt;-0.054 (on par)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HiFiNet generates audios faster.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Real Time Factor (RTF) is used to measure the performance of a vocoder. It is calculated as the time needed to generate the audio divided by the audio duration.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;HiFiNet&lt;/EM&gt; is a parallel vocoder so it can generate multiple samples at the same time. Here are some measurements of &lt;EM&gt;HiFiNet&lt;/EM&gt; performance on both GPU and CPU devices. &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With output at a 24khz sampling rate, on an M60 GPU, through a carefully optimized &lt;A href="https://developer.nvidia.com/cuda-zone" target="_blank" rel="noopener"&gt;CUDA&lt;/A&gt; implementation, the vocoder RTF is around 0.01, which means the &lt;EM&gt;HiFiNet&lt;/EM&gt; system can generate 10 seconds of audio in 0.1 seconds. This is almost 3x the speed of our previous production vocoder.&lt;/P&gt;
&lt;P&gt;On CPU machines, thanks to the highly optimized &lt;A href="https://onnx.ai/" target="_blank" rel="noopener"&gt;ONNX&lt;/A&gt; Runtime, the vocoder RTF is around 0.02 for 24khz sampling rate output.&lt;/P&gt;
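&lt;P&gt;To make the RTF arithmetic concrete, the tiny sketch below (illustrative only) computes RTF from a measured synthesis time and the duration of the generated audio, reproducing the example figures quoted above.&lt;/P&gt;
&lt;PRE&gt;
# RTF = time spent generating the audio / duration of the generated audio
def real_time_factor(synthesis_seconds, audio_seconds):
    return synthesis_seconds / audio_seconds

# Example figures from the text: 10 seconds of audio generated in 0.1 s (GPU) or 0.2 s (CPU)
print(real_time_factor(0.1, 10.0))  # 0.01 -- the GPU figure quoted above
print(real_time_factor(0.2, 10.0))  # 0.02 -- the CPU figure quoted above
&lt;/PRE&gt;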
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the performance improvement of &lt;EM&gt;HiFiNet&lt;/EM&gt;, the end-to-end synthesis speed is about 2x as fast as our previous Neural TTS engine, while the audio quality is significantly improved at the same time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What to expect next&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Currently we support up to 24khz sampling rate on Azure Neural TTS service with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;68 neural voice models&lt;/A&gt; available. In some highly sophisticated scenarios like audio dubbing, higher fidelity output like 48khz sampling rate makes a world of difference. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The snippet below from an audio spectrum shows the difference between 48khz and 24khz sampling rates. Audio with a 48khz sampling rate has a higher frequency response range, which keeps more sophisticated details and nuances of the sound. Such a high sampling rate creates challenges for both voice quality and inference speed.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="48khz frequency range.png" style="width: 992px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/231195i2148010E6A3DC434/image-size/large?v=v2&amp;amp;px=999" role="button" title="48khz frequency range.png" alt="24khz vs. 48khz: different frequency range" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;24khz vs. 48khz: different frequency range&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In our exploration, &lt;EM&gt;HiFiNet&lt;/EM&gt; can handle both challenges well.&amp;nbsp; According to our experiments, &lt;EM&gt;HiFiNet&lt;/EM&gt; vocoder on 48khz sampling rate can be trained to achieve even higher quality with reasonable inference speed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the difference in audio fidelity between the TTS output at 24khz and 48khz sampling rates, &lt;STRONG&gt;with a hi-fi speaker or headset&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="546px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="245px" height="30px"&gt;
&lt;P&gt;Language&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="152px" height="30px"&gt;
&lt;P&gt;24khz HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="30px"&gt;
&lt;P&gt;48khz HiFiNet&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD colspan="2"&gt;
&lt;P&gt;English (UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.hifinet-LongSentence-00001.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.hifinet_48k-LongSentence-00001.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="245px" height="57px"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="152px" height="57px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00013-hifinet24k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="148px" height="57px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00013-hifinet48k.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The 48khz vocoder is now in private preview and can be applied to custom voices. &amp;nbsp;Contact mstts [at] microsoft.com for details.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create a custom voice with HiFiNet&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The HiFiNet vocoder is also available in the&amp;nbsp;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt;&amp;nbsp;capability, enabling organizations to create a unique brand voice in multiple languages for their unique scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;A href="https://aka.ms/customneural" target="_blank" rel="noopener"&gt;Learn more about the process for getting started with Custom Neural Voice&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get started&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering more natural and intuitive voice experiences for global customers. Text to Speech has more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;70 standard voices in over 40 languages&lt;/A&gt;&amp;nbsp;and locales in addition to our growing list of&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;Neural TTS voices&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our &lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Nov 2020 16:41:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-11-19T16:41:18Z</dc:date>
    </item>
    <item>
      <title>Apps can now narrate what they see in the world as well as people do</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/ba-p/1667146</link>
      <description>&lt;P&gt;&lt;SPAN&gt;How would you leverage technology capable of generating natural language image descriptions that are, in many cases, just as good or better than what a human could produce? What if that capability is just one cloud API call away? Would you create live scene captions for people who are blind or low vision to better understand the world around them, like &lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/ai/seeing-ai" target="_blank" rel="noopener"&gt;Seeing AI&lt;/A&gt;&lt;SPAN&gt;?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/" target="_blank" rel="noopener"&gt;Azure Cognitive Services&lt;/A&gt;, you can now take advantage of state-of-the-art image captioning that has achieved human parity on captioning benchmarks thanks to advancements in the underlying AI model. Below are some examples showing how the improved model is more accurate than the old one:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/ubpEUksa3v0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="border-style: hidden; width: 100%;" border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="35.22099447513812%" height="314px" style="border-style: hidden; width: 35.22099447513812%; height: 314px;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="press2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/226568iB09A69BDBDFDB6C2/image-size/large?v=v2&amp;amp;px=999" role="button" title="press2.png" alt="press2.png" /&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;TD width="64.77900552486187%" height="314px" style="border-style: hidden; width: 64.77900552486187%;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="press8.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/226569i99C2AD0C3FFE4385/image-size/large?v=v2&amp;amp;px=999" role="button" title="press8.png" alt="press8.png" /&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="35.22099447513812%" height="83px" style="border-style: hidden; width: 35.22099447513812%;"&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Improved model: A trolley on a city street&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Old model: a view of a city street&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="64.77900552486187%" height="83px" style="border-style: hidden; width: 64.77900552486187%;"&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Improved model: A person using a microscope&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Old model: A person sitting at a table using a laptop&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, let us take a closer look at the technology and how to easily harness its power for your users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Behind the Scenes of the Technology&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;The &lt;/SPAN&gt;novel object captioning at scale (&lt;A href="https://nocaps.org/" target="_blank" rel="noopener"&gt;nocaps&lt;/A&gt;) challenge &lt;SPAN&gt;evaluates AI models on their ability to generate image captions describing new&lt;/SPAN&gt;&lt;SPAN&gt; objects that are not present in their training data. Microsoft’s Azure AI team pioneered the Visual Vocabulary (VIVO) pre-training technique that led to the industry first of &lt;/SPAN&gt;&lt;A href="https://evalai.cloudcv.org/web/challenges/challenge-page/355/leaderboard/1011" target="_blank" rel="noopener"&gt;surpassing human performance on the nocaps benchmark&lt;/A&gt;&lt;SPAN&gt;.&amp;nbsp;Before we dig into this innovation, it helps to understand Vision and Language Pre-training (VLP). VLP is a cross-modality (across vision and language) learning technique that uses large-scale image/sentence data pairs to train machine learning models capable of generating natural language captions for images. However, because visual concepts are learned from image/sentence pairs, which are costly to obtain, it is difficult to train a broadly useful model with wide visual concept coverage. This is where VIVO pre-training comes in. It improves and extends VLP to allow rich visual concepts to be learned from easier-to-obtain image/word pairs (instead of sentences) to build a large-scale visual vocabulary. While natural language sentence generation is still trained with limited visual concepts, the resulting image caption is cleverly enriched by new objects from the large-scale visual vocabulary.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture1.png" style="width: 755px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222986iEFEE7A610A3321CC/image-dimensions/755x437?v=v2" width="755" height="437" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN class="EOP SCXW21534902 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="EOP SCXW43134901 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;Figure 1: VIVO pre-training uses paired image-tag data to learn a rich visual vocabulary where image region features and tags of the same object are aligned. Fine-tuning is conducted on paired image-sentence data that only cover a limited number of objects (in blue). During inference, our model can generalize to describe novel objects (in yellow) that are learnt during VIVO pre-training.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please see this &lt;A href="https://aka.ms/MSRBlogImageCap" target="_blank" rel="noopener"&gt;MSR blog post&lt;/A&gt; to learn more about VIVO pre-training.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Try the Service in Your App&lt;/H2&gt;
&lt;P&gt;Imagine you would like to generate alternative text descriptions for images your users upload to your app. The Azure Computer Vision service, with its much-improved “describe image” (image captioning) capability, can help. Let us take it for a spin.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog post, we will use the Python client library to invoke the service. Try these links if you prefer a &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/client-library?pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;different language&lt;/A&gt; or want to invoke the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/curl-analyze" target="_blank" rel="noopener"&gt;REST API&lt;/A&gt; directly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Prerequisites&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.python.org/downloads/" target="_blank" rel="noopener"&gt;Python&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;An Azure subscription -&amp;nbsp;&lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;create one for free&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Once you have your Azure subscription, &lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" target="_self"&gt;create a Computer Vision resource&lt;/A&gt;:
&lt;UL&gt;
&lt;LI&gt;Subscription: Pick the subscription you would like to use. If you just created a new Azure subscription, it should be an option in the dropdown menu.&lt;/LI&gt;
&lt;LI&gt;Resource group: Pick an existing one or create a new one.&lt;/LI&gt;
&lt;LI&gt;Region: Pick the region you would like your resource to be in.&lt;/LI&gt;
&lt;LI&gt;Name: Give your resource a unique name.&lt;/LI&gt;
&lt;LI&gt;Pricing tier: You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.&lt;/LI&gt;
&lt;LI&gt;Then click “&lt;STRONG&gt;Review + create&lt;/STRONG&gt;” to review your choices, and click “&lt;STRONG&gt;Create&lt;/STRONG&gt;” to deploy the resource.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture2.png" style="width: 665px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222988i7E0ACD0524F8E985/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once your resource is deployed, click&amp;nbsp;“&lt;STRONG&gt;Go to resource&lt;/STRONG&gt;.”&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 385px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222989i654978B497385668/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click on “&lt;STRONG&gt;Keys and Endpoint&lt;/STRONG&gt;” to get your subscription key and endpoint. You will need these for the code sample below.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture4.png" style="width: 703px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222990i9FEF3DBBCCC6B2A7/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture4.png" alt="Picture4.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Install the client&lt;/H3&gt;
&lt;P&gt;You can install the client library with:&lt;/P&gt;
&lt;PRE&gt;pip install --upgrade azure-cognitiveservices-vision-computervision&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Create and run the sample&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Copy the following code into a text editor.&lt;/LI&gt;
&lt;LI&gt;Optionally, replace the value of remote_image_url with the URL of a different image for which to generate a caption.&lt;/LI&gt;
&lt;LI&gt;Also optionally, set useRemoteImage to False and set local_image_path to the path of a local image for which to generate a caption.&lt;/LI&gt;
&lt;LI&gt;Save the code as a file with a .py extension, for example describe-image.py.&lt;/LI&gt;
&lt;LI&gt;Open a command prompt window.&lt;/LI&gt;
&lt;LI&gt;At the prompt, use the python command to run the sample. For example, python describe-image.py.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import sys

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Best practice is to read this key from secure storage, 
# for this example we'll embed it in the code.
subscription_key = "&amp;lt;your subscription key here&amp;gt;"
endpoint = "&amp;lt;your endpoint here&amp;gt;"

# Create the computer vision client
computervision_client = ComputerVisionClient(
    endpoint, CognitiveServicesCredentials(subscription_key))

# Set to False if you want to use local image instead
useRemoteImage = True

if (useRemoteImage):
    # Get caption for a remote image, change to your own image URL as appropriate
    remote_image_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg"
    description_results = computervision_client.describe_image(
        remote_image_url)
else:
    # Get caption for a local image, change to your own local image path as appropriate
    local_image_path = "&amp;lt;replace with local image path&amp;gt;"
    with open(local_image_path, "rb") as image:
        description_results = computervision_client.describe_image_in_stream(
            image)

# Get the first caption (description) from the response
if (len(description_results.captions) == 0):
    image_caption = "No description detected."
else:
    image_caption = description_results.captions[0].text

print("Description of image:", image_caption)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Related&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://aka.ms/AA99bjt" target="_self"&gt;What’s that? Microsoft AI system describes images as well as people do&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Learn more about other &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision" target="_blank" rel="noopener"&gt;Computer Vision&lt;/A&gt; capabilities.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Oct 2020 00:21:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/apps-can-now-narrate-what-they-see-in-the-world-as-well-as/ba-p/1667146</guid>
      <dc:creator>boxinli</dc:creator>
      <dc:date>2020-10-15T00:21:12Z</dc:date>
    </item>
    <item>
      <title>Real-time Inference on NVIDIA GPUs in Azure Machine Learning (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/real-time-inference-on-nvidia-gpus-in-azure-machine-learning/ba-p/1737522</link>
      <description>&lt;P&gt;AI today is about scale: models with billions of parameters used by millions of people. &lt;A href="https://azure.com/ml" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt; is built to support your delivery of AI-powered experiences at scale. With our notebook-based authoring experience, our low-code and no-code training platform, our responsible AI integrations, and our industry-leading ML Ops capabilities, we give you the ability to develop large machine learning models easily, responsibly, and reliably.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One key component of employing AI in your business is model &lt;EM&gt;serving&lt;/EM&gt;. Once you have trained a model and assessed it per&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml" target="_blank" rel="noopener"&gt;responsible machine learning principles&lt;/A&gt;, you need to quickly process requests for predictions, for many users at a time. While serving models on general-purpose CPUs can work well for less complex models serving fewer users, those of you with a significant reliance on real-time AI predictions have been asking us how you can leverage GPUs to scale more effectively.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;That is why today, we are partnering with NVIDIA to announce the availability of the &lt;A href="https://developer.nvidia.com/nvidia-triton-inference-server" target="_blank" rel="noopener"&gt;Triton Inference Server&lt;/A&gt; in Azure Machine Learning to deliver cost-effective, turnkey GPU inferencing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are three components to serving an AI model at scale: server, runtime, and hardware. This new Triton server, together with &lt;A href="https://onnxruntime.ai" target="_blank" rel="noopener"&gt;ONNX Runtime&lt;/A&gt; and NVIDIA GPUs on Azure, complements Azure Machine Learning’s support for developing AI models at scale by giving you the ability to serve AI models to many users cheaply and with low latency. Below, we go into detail about each of the three components to serving AI models at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Server&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Through server-side mini-batching, Triton Inference Server in Azure Machine Learning can achieve significantly higher throughput than a general-purpose Python server such as Flask.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gopalv_0-1601598975457.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223614i1A8448A624BDE23B/image-size/large?v=v2&amp;amp;px=999" role="button" title="gopalv_0-1601598975457.png" alt="gopalv_0-1601598975457.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Triton can support models in ONNX, PyTorch, TensorFlow, and Caffe2, giving your data scientists the freedom to explore any framework of interest to them during training time.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Runtime&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For even better performance, serve your models in ONNX Runtime, a high-performance runtime for both training (in preview) and inferencing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gopalv_1-1601598975463.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223615iD764B20AFA3011A2/image-size/large?v=v2&amp;amp;px=999" role="button" title="gopalv_1-1601598975463.png" alt="gopalv_1-1601598975463.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-right"&gt;&lt;A href="https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT/triton" target="_self"&gt;&lt;EM&gt;Numbers Courtesy of NVIDIA&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;ONNX Runtime is used by default when serving ONNX models in Triton, and you can convert &lt;A href="https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html" target="_blank" rel="noopener"&gt;PyTorch&lt;/A&gt;, &lt;A href="https://github.com/onnx/tensorflow-onnx" target="_blank" rel="noopener"&gt;TensorFlow&lt;/A&gt;, and &lt;A href="http://onnx.ai/sklearn-onnx/" target="_blank" rel="noopener"&gt;Scikit-learn&lt;/A&gt; models to ONNX.&lt;/P&gt;
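&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a concrete illustration of that conversion path, here is a small, hypothetical sketch (the model, file name, and input shape are placeholders) that exports a PyTorch model to ONNX with torch.onnx.export and then sanity-checks the result with ONNX Runtime before placing it in a Triton model repository.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import torch
import onnxruntime as ort

# Stand-in model; substitute your trained network.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
model.eval()

# Export to ONNX using an example input (batch of 1, 4 features).
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Quick check with ONNX Runtime before handing the file to Triton.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy_input.numpy()})
print("ONNX Runtime output shape:", outputs[0].shape)
&lt;/LI-CODE&gt;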
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Hardware&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;NVIDIA Tesla T4 GPUs in Azure provide a hardware-accelerated foundation for a wide variety of models and inferencing performance demands. The NCasT4_v3 series is a new, lightweight GPU-accelerated VM family. It offers a cost-effective option for customers performing real-time or small-batch inferencing who may not need the throughput afforded by larger GPU sizes, such as the V100-powered NDv2 and NCv3-series VMs, and who want a wider regional deployment footprint.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gopalv_2-1601598975466.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223613i357739DF6FEBBB6C/image-size/large?v=v2&amp;amp;px=999" role="button" title="gopalv_2-1601598975466.png" alt="gopalv_2-1601598975466.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new NCasT4_v3 VMs are &lt;A href="https://aka.ms/NCT4v3Preview" target="_blank" rel="noopener"&gt;currently available for preview&lt;/A&gt; in the West US 2 region, with 1 to 4 NVIDIA Tesla T4 GPUs per VM, and will soon expand in availability with &lt;A href="https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines" target="_blank" rel="noopener"&gt;over a dozen planned regions&lt;/A&gt; across North America, Europe and Asia.&lt;/P&gt;
&lt;P&gt;To learn more about NCasT4_v3-series virtual machines, visit the&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/virtual-machines/nct4-v3-series" target="_blank" rel="noopener"&gt;NCasT4_v3-series documentation&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Easy to Use&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using Triton Inference Server with ONNX Runtime in Azure Machine Learning is simple. Assuming you have a &lt;A href="https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/model_repository.html" target="_blank" rel="noopener"&gt;Triton Model Repository&lt;/A&gt; with a parent directory triton and an Azure Machine Learning &lt;A href="https://docs.microsoft.com/azure/machine-learning/reference-azure-machine-learning-cli#deployment-configuration-schema" target="_blank" rel="noopener"&gt;deploymentconfig.json&lt;/A&gt;, run the commands below to register your model and deploy a webservice.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;az ml model register -n triton_model -p triton --model-framework=Multi
az ml model deploy -n triton-webservice -m triton_model:1 --dc deploymentconfig.json --compute-target aks-gpu&lt;/LI-CODE&gt;
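&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the deployment completes, you can look the service up from the Python SDK to confirm it is healthy and retrieve its scoring URI. This is a minimal sketch, assuming a config.json for your workspace is available locally and the service name matches the CLI command above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.core import Workspace
from azureml.core.webservice import Webservice

# Assumes config.json for your Azure ML workspace is in the working directory.
ws = Workspace.from_config()

# Same service name as in the `az ml model deploy` command above.
service = Webservice(workspace=ws, name="triton-webservice")

print("Service state:", service.state)
print("Scoring URI:", service.scoring_uri)
&lt;/LI-CODE&gt;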
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Next Steps&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, you have seen how Azure Machine Learning can enable your business to serve large AI models to many users simultaneously. By bringing together a high-performance inference server, a high-performance runtime, and high-performance hardware, we give you the ability to serve many requests per second at millisecond latencies while saving money.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To try this new offering yourself:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Sign up for an &lt;A href="https://azure.microsoft.com/en-us/free/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning trial&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/triton-aml-sample" target="_blank" rel="noopener"&gt;Clone our samples repository on GitHub&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/triton-aml-docs" target="_blank" rel="noopener"&gt;Read our documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Be sure to &lt;A href="https://feedback.azure.com/forums/257792-machine-learning" target="_blank" rel="noopener"&gt;let us know what you think&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;You can also request access to the new NCasT4_v3 VM series (In Preview) by &lt;A href="https://aka.ms/NCT4v3Preview" target="_blank" rel="noopener"&gt;applying here.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Oct 2020 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/real-time-inference-on-nvidia-gpus-in-azure-machine-learning/ba-p/1737522</guid>
      <dc:creator>gopalv</dc:creator>
      <dc:date>2020-10-05T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Azure Machine Learning's native support for MLflow</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-machine-learning-s-native-support-for-mlflow/ba-p/1737491</link>
      <description>&lt;H2&gt;Azure Machine Learning service expands support for MLflow (Public Preview)&lt;/H2&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Background&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;Many data scientists start their machine learning projects using Jupyter notebooks or editors like Visual Studio Code. &lt;/SPAN&gt;&lt;SPAN&gt;To ensure models can be used in production, it is essential to systematically track all aspects of an ML workflow, such as the data, environment, code, and models produced. &lt;/SPAN&gt;These reproducibility challenges become even more complex when working in a hybrid cloud environment, but they are mitigated if both environments conform to open standards.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;AzureML’s support for MLflow&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;Azure ML now supports managing the end to end machine learning lifecycle using open &lt;/SPAN&gt;&lt;A href="https://www.mlflow.org/" target="_self"&gt;MLflow&lt;/A&gt;&amp;nbsp;&lt;SPAN&gt;standards, enabling existing workloads to seamlessly move from local execution to the intelligent cloud &amp;amp; edge.&lt;/SPAN&gt; &lt;SPAN&gt;Azure Machine Learning has expanded support for running machine learning workflows to train, register and deploy models via native integration (API compatibility) with MLflow. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Let’s walk through some of the latest enhancements to the Azure ML and MLflow interoperability. &lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;SPAN&gt;MLflow Projects&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;A href="https://www.mlflow.org/docs/latest/projects.html" target="_self"&gt;&lt;SPAN&gt;MLflow Projects&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;provide a way to organize and describe your code to enable other data scientists or automated tools to run it. Any local directory or Git repository can be treated as an MLflow project. You can enable MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to connect your MLflow experiments and Azure Machine Learning. You can submit your MLflow experiments locally or remotely using MLflow Projects&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;with full tracking support in AzureML by setting the project backend to “azureml”.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
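&lt;P&gt;Before submitting a project, point MLflow’s tracking URI at your workspace so that runs, metrics, and artifacts are recorded in Azure ML. The snippet below is a minimal sketch, assuming the azureml-mlflow package is installed, a config.json for your workspace is available locally, and the experiment name is just an example.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow
from azureml.core import Workspace

# Assumes azureml-mlflow is installed and config.json is in the working directory.
ws = Workspace.from_config()

# Route MLflow tracking calls to the Azure ML workspace.
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())

# Example experiment name; runs submitted after this point are grouped under it.
mlflow.set_experiment("mlflow-project-tutorial")
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;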
&lt;P&gt;&lt;SPAN&gt;A project includes the following:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Conda environment specification (conda.yaml)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Any .py or .sh file in the project can be an entry point, with no parameters explicitly declared. When you run the command with a set of parameters, MLflow passes each parameter on the command line using --key &amp;lt;value&amp;gt; syntax.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;You specify more options by adding an MLproject file, which is a text file in YAML syntax. An example MLproject file looks like this:&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="yaml"&gt;name: tutorial
conda_env: conda.yaml
entry_points:

  main:
    parameters:
      alpha: float
      l1_ratio: {type: float, default: 0.1}
    command: "python train.py {alpha} {l1_ratio}"
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Here’s an example setup for a local run. I’ve set the backend to &lt;STRONG&gt;“azureml”&lt;/STRONG&gt; to get all the tracking support and error logging from Azure ML. The backend config object stores the necessary information, such as the compute target and whether to use a user-managed local environment or a system-managed environment.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;local_env_run = mlflow.projects.run(uri=".",
                                    parameters={"alpha":0.3},
                                    backend = "azureml",
                                    use_conda=False,
                                    backend_config = {"USE_CONDA": False})
                                    
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;In the image below, you can see that Azure ML automatically tags the run with MLflow-related metadata for visibility and logs the Git info.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_0-1601595922835.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223593i7B27AE121587D99F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_0-1601595922835.png" alt="shivpat_0-1601595922835.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You can then log and visualize your run metrics in Azure Machine Learning Studio or the MLflow Experimentation UI.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_1-1601595922840.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223595i5516EFD1FA6FCF4C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_1-1601595922840.png" alt="shivpat_1-1601595922840.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can see the same metrics in the Azure ML studio and MLflow UI.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_2-1601595922845.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223594iA1797F37FDF1E8A1/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_2-1601595922845.png" alt="shivpat_2-1601595922845.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;SPAN&gt;MLflow Model Registry and Deployment&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;With the new support for the MLflow model format, it becomes even easier to track and deploy models on Azure ML. You can register models from local files or a run and use it to make predictions online or in batch mode.&amp;nbsp;&lt;SPAN&gt;By deploying models as a web service, you can apply the Azure Machine Learning monitoring and data drift detection functionalities to your production models. Let's look at an example:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From the MLflow project run, you can see that the output model is registered following the MLflow model schema.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_3-1601595922850.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223596i7EA07E6251EE1537/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_3-1601595922850.png" alt="shivpat_3-1601595922850.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The MLmodel file contains all the model details and metadata.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_4-1601595922854.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223597iF95C07990F608D88/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_4-1601595922854.png" alt="shivpat_4-1601595922854.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
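&lt;P&gt;Besides the automatic registration from a project run, you can also register a model yourself with MLflow’s registry API; with the tracking URI pointed at your workspace as shown earlier, the model lands in the Azure ML model registry. This is a minimal sketch in which the run ID, artifact path, and model name are placeholders.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow

# Placeholders: the ID of a completed run and the artifact path used when logging the model.
run_id = "&amp;lt;your run id&amp;gt;"
model_uri = "runs:/{}/model".format(run_id)

# With the tracking URI set to the Azure ML workspace, this registers the model
# in the workspace model registry (creating version 1 if the name is new).
result = mlflow.register_model(model_uri=model_uri, name="my-mlflow-model")
print("Registered", result.name, "version", result.version)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;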
&lt;P&gt;If you want to register, containerize, and deploy the model, you can now do that in one step. Using the &lt;A href="https://mlflow.org/docs/latest/models.html#deploy-a-python-function-model-on-microsoft-azure-ml" target="_self"&gt;mlflow.azureml.deploy() Python SDK method&lt;/A&gt;, AzureML will register the model in AzureML, build the Docker container, and deploy it to the chosen target. The deployed service will also retain the MLflow metadata as tags, as shown in the image below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shivpat_5-1601595922861.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/223598iF1B342BC081A3A4D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="shivpat_5-1601595922861.png" alt="shivpat_5-1601595922861.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
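&lt;P&gt;A rough sketch of that one-step registration and deployment is below. It is illustrative only: the model URI, service name, and the ACI deployment configuration are assumptions, and the exact options available depend on your mlflow and azureml-mlflow versions (see the linked MLflow documentation).&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mlflow.azureml
from azureml.core import Workspace
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Illustrative deployment target: a small Azure Container Instance for testing.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# model_uri may reference a registered model or a run's logged model artifacts.
webservice, azure_model = mlflow.azureml.deploy(
    model_uri="models:/my-mlflow-model/1",
    workspace=ws,
    deployment_config=aci_config,
    service_name="my-mlflow-service",
)
print("Scoring URI:", webservice.scoring_uri)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;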
&lt;P&gt;With continued support for MLflow, Azure ML is committed to interoperability with open-source standards, providing flexibility for users to work on-premises or in the cloud. For more details about the MLflow and Azure ML integration, check out the following links:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow" target="_self"&gt;How to use MLflow with Azure Machine Learning&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow" target="_self"&gt;MLflow and Azure Machine Learning notebook examples&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow" target="_self"&gt;Framework Specific notebooks&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 02 Oct 2020 16:31:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-machine-learning-s-native-support-for-mlflow/ba-p/1737491</guid>
      <dc:creator>shivani-patel</dc:creator>
      <dc:date>2020-10-02T16:31:23Z</dc:date>
    </item>
    <item>
      <title>Simplify and accelerate AI for the entire data science team with Azure Machine Learning designer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/simplify-and-accelerate-ai-for-the-entire-data-science-team-with/ba-p/1718404</link>
      <description>&lt;P&gt;At &lt;A href="https://www.microsoft.com/ignite" target="_blank" rel="noopener"&gt;Microsoft Ignite&lt;/A&gt;, we announced the general availability of Azure Machine Learning designer, the drag-and-drop workflow capability in Azure Machine Learning studio which simplifies and accelerates the process of building, testing, and deploying machine learning models for the entire data science team, from beginners to professionals. We launched the preview in November 2019, and we have been excited with the strong customer interest. We listened to our customers and appreciated all the feedback. Your responses helped us reach this milestone. Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AMLdesigner banner.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222996i9C6FC7F81FA684CC/image-size/large?v=v2&amp;amp;px=999" role="button" title="AMLdesigner banner.png" alt="AMLdesigner banner.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;“By using Azure Machine Learning designer we were able to quickly release a valuable tool built on machine learning insights, that predicted occupancy in trains, promoting social distancing in the fight against Covid-19.” - &lt;/EM&gt;&lt;EM&gt;Steffen Pedersen, Head of AI and advanced analytics, DSB (Danish State Railways&lt;/EM&gt;&lt;EM&gt;)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Artificial intelligence (AI) is gaining momentum in all industries. Enterprises today are adopting AI at a rapid pace with people of different skill sets, from business analysts and developers to data scientists and machine learning engineers. The drag-and-drop experience in Azure Machine Learning designer can help your entire data science team speed up machine learning model building and deployment. Specifically, it is tailored for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data scientists who are more familiar with visual tools than coding.&lt;/LI&gt;
&lt;LI&gt;Users who are new to machine learning and want to learn it in an intuitive way.&lt;/LI&gt;
&lt;LI&gt;Machine learning experts who are interested in rapid prototyping.&lt;/LI&gt;
&lt;LI&gt;Machine learning engineers who need a visual workflow to manage model training and deployment.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Connect and prepare data with ease&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning designer is fully integrated with the Azure Machine Learning dataset service for the benefits of versioning, tracking, and data monitoring. You can &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets" target="_blank" rel="noopener"&gt;import data&lt;/A&gt; by dragging and dropping a registered dataset from the asset library, by connecting to various data sources including an HTTP URL, Azure Blob storage, Azure Data Lake, or Azure SQL, or by uploading a local file with the Import Data module. You can right-click to preview and visualize the data profile, and preprocess data using a rich set of built-in modules for data transformation and feature engineering.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Connect data.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222561iD23B3A96D53429D2/image-size/large?v=v2&amp;amp;px=999" role="button" title="Connect data.png" alt="Connect data.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Build and train models with no-code/low-code&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Azure Machine Learning designer, you can build and train machine learning models with state-of-the-art machine learning and deep learning algorithms, including those for traditional machine learning, computer vision, text analytics, recommendation, and anomaly detection. You can also use &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-designer-python" target="_self"&gt;customized Python and R code&lt;/A&gt; to build your own models. Each module can be configured to run on different Azure Machine Learning compute clusters, so data scientists don’t need to worry about scaling limitations and can focus on their training work.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Train model.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222538iF53E1EC4DA62E3C8/image-size/large?v=v2&amp;amp;px=999" role="button" title="Train model.png" alt="Train model.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Validate and evaluate model performance&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can evaluate and compare your trained model performance with a few clicks using the built-in evaluate model modules, or use execute Python/R script modules to &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-designer-experiments" target="_blank" rel="noopener"&gt;log the customized metrics/images&lt;/A&gt;. All metrics are stored in run history and can be compared among different runs in the studio UI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Evaluate model.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222682iA8A3384140A57927/image-size/large?v=v2&amp;amp;px=999" role="button" title="Evaluate model.png" alt="Evaluate model.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Root cause analysis with an immersive debugging experience&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;While interactively running machine learning pipelines, you can always perform quick root cause analysis using the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/graph-search-syntax" target="_blank" rel="noopener"&gt;graph search and navigation&lt;/A&gt; to quickly narrow down to the failed step, preview logs and outputs for debugging and troubleshooting without losing the context of the pipeline, and find snapshots to trace the scripts and dependencies used to run the pipeline.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Debug and troubleshoot.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222974iAC4E10CAEC1C6E30/image-size/large?v=v2&amp;amp;px=999" role="button" title="Debug and troubleshoot.gif" alt="Debug and troubleshoot.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Deploy models and publish endpoints with a few clicks&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Data scientists and machine learning engineers can deploy models for real-time and batch inferencing as versioned REST endpoints to their own environment. You don’t need deep knowledge of coding, model management, container services, and so on, as scoring files and the deployment image are automatically generated with a few clicks. Models and other assets can also be registered in the central registry for MLOps tracking, lineage, and automation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Deploy.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222683i5128488229D3D94D/image-size/large?v=v2&amp;amp;px=999" role="button" title="Deploy.png" alt="Deploy.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started today&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started today with your new &lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;Azure free trial&lt;/A&gt;, and learn more about &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/designer/" target="_blank" rel="noopener"&gt;Azure Machine Learning designer&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Sep 2020 12:33:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/simplify-and-accelerate-ai-for-the-entire-data-science-team-with/ba-p/1718404</guid>
      <dc:creator>Lu_Zhang</dc:creator>
      <dc:date>2020-09-30T12:33:44Z</dc:date>
    </item>
    <item>
      <title>Leveling-up Local Experiment Runs with the VS Code AML Extension</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/leveling-up-local-experiment-runs-with-the-vs-code-aml-extension/ba-p/1725975</link>
      <description>&lt;DIV&gt;&lt;SPAN&gt;Hey AzML community! The VS Code team is excited to announce version 0.6.15 of the AzML extension, with a brand new way for you to validate your scripts, environments, and datasets before submitting to a remote cluster.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you'd like to follow along with the blog post and try out the new features, you can install the extension &lt;A href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;here!&lt;/A&gt;&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Gaining confidence in your experiment runs&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Experiencing a sense of worry or anxiety when submitting a remote experiment is common and expected. It's hard to predict how the training script you've been working very hard on is going to behave once it runs on your remote target. Many of you have expressed pain in not:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Knowing whether the environment you want to use will correctly work with your training script.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Knowing whether your datasets are materialized and being referenced correctly.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Having the confidence to submit your remote experiment and context-switch to another project you're working on.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;The VS Code AzML extension team has been working hard over the past few weeks to bring a new capability to alleviate your pains: &lt;STRONG&gt;running a local experiment with an interactive debugging session.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN style="font-family: inherit;"&gt;&lt;SPAN style="font-family: inherit;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Interactive Debugging Smaller.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222935i6CA4507AB2DF796A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Interactive Debugging Smaller.gif" alt="Interactive Debugging with the AML Extension" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Interactive Debugging with the AML Extension&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;You might be asking yourself: how is this different from running my training script in VS Code? Here are some key differences:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;The AzML service always uses an environment when submitting a remote run. These environments are materialized as Docker containers. When running a local experiment, the AzML extension will build the&amp;nbsp;&lt;STRONG&gt;same Docker image&lt;/STRONG&gt; and&amp;nbsp;&lt;STRONG&gt;same Docker container&lt;/STRONG&gt; that's used when running remotely.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Running a Python script normally assumes that you've taken care of data materialization and access. When running remotely, you're recommended to use AzML Datasets giving you the advantage of working with&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-with-datasets" target="_blank" rel="noopener"&gt;helper functions and configuration options&lt;/A&gt;. The extension enables you to configure a local run and work with Datasets the&amp;nbsp;&lt;STRONG&gt;same way in which you would remotely,&amp;nbsp;&lt;/STRONG&gt;helping you validate that your dataset is being used correctly.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The extension streamlines setting up an optional debug session when running your experiment. This allows you to set breakpoints and step through your code with ease.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;The extension has tightly coupled components of the debugging experience, like the &lt;A href="https://code.visualstudio.com/Docs/editor/debugging#_debug-console-repl" target="_blank" rel="noopener"&gt;debug console&lt;/A&gt;, with your experiment. Expressions you evaluate or print to the console will be written in your 70_driver_log.txt.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Running a local experiment is straightforward and closely resembles the extension's current functionality for submitting a remote run. Here's a summary of the steps for submitting a run.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Right-click on an experiment node in the tree view and choose the&amp;nbsp;&lt;EM&gt;Run Experiment&lt;/EM&gt; option.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Pick the local run option and choose whether you want to debug.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Create a new run configuration or pick a previously created one. The rest of the steps assume the former.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Pick an environment and dataset for your training.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;(Only when debugging) Add the &lt;A href="https://github.com/microsoft/debugpy" target="_blank" rel="noopener"&gt;debugpy&lt;/A&gt;&amp;nbsp;package to your environment. Debugpy is required when running an interactive debug session.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Validate the final configuration options and submit your run.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;(Optional) If you've chosen to debug, start the debugger via the prompt or from your run node.&lt;/SPAN&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Local Experiment GIF.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222930i521E91ED2AC1C5D1/image-size/large?v=v2&amp;amp;px=999" role="button" title="Local Experiment GIF.gif" alt="Local Experiment Submission with AML Extension" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Local Experiment Submission with AML Extension&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Congratulations! If you've followed the above steps, you've successfully submitted a local experiment and can now confidently proceed to submit a remote run.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;For more detailed step-by-step instructions, you can follow our &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-visual-studio-code" target="_blank" rel="noopener"&gt;docs here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Feedback&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;We're working hard to further improve your run experience from within VS Code, with a focus on the following scenarios:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Debugging a single-node remote run on AmlCompute targets.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Streamlining submitting a remote run after succeeding locally.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Streamlining running a local debug experiment from a failed remote run.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If there's anything that you would like us to prioritize, please feel free to let us know on &lt;A href="https://github.com/microsoft/vscode-tools-for-ai/issues/new" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you would like to provide feedback on the overall extension, please feel free to do so via our &lt;A href="http://aka.ms/aml-ext-survey" target="_blank" rel="noopener"&gt;survey&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Tue, 29 Sep 2020 22:57:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/leveling-up-local-experiment-runs-with-the-vs-code-aml-extension/ba-p/1725975</guid>
      <dc:creator>Sid_Unnithan</dc:creator>
      <dc:date>2020-09-29T22:57:18Z</dc:date>
    </item>
    <item>
      <title>Microsoft named a leader in Forrester’s Notebook-based Predictive Analytics &amp; Machine Learning Wave</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-forrester-s-notebook-based/ba-p/1718391</link>
      <description>&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="157464_1q.gif" style="width: 755px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222922i4A88CAF964665932/image-size/large?v=v2&amp;amp;px=999" role="button" title="157464_1q.gif" alt="157464_1q.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Forrester Research recently released their &lt;A href="https://aka.ms/forrester-PAML" target="_blank" rel="noopener"&gt;Wave report for Notebook-based Predictive Analytics and Machine Learning&lt;/A&gt;. &lt;A href="https://azure.microsoft.com/en-us/" target="_blank" rel="noopener"&gt;Microsoft Azure&lt;/A&gt; is named a leader in this Wave, receiving the highest score possible in the ability to execute criteria and rated highest in the strategy category. You can download a complimentary copy of &lt;A href="https://aka.ms/forrester-PAML" target="_blank" rel="noopener"&gt;The Forrester Wave™ for Notebook-based Predictive Analytics and Machine Learning Solutions, Q3 2020 report here&lt;/A&gt;. In this post we’ll look at why Forrester rated Microsoft Azure as a leader.&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Forrester evaluated &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;, recognizing its ‘full suite of enterprise PAML capabilities, from centralized model registries to hyperparameter tuning and modular model training and deployment pipelines.’ Forrester gave Azure Machine Learning the highest possible score in 13 evaluation metrics, the most in the report.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;
&lt;P&gt;&lt;EM&gt;“&lt;/EM&gt;&lt;STRONG&gt;The major cloud vendors have long had a gap in offering a comprehensive PAML platform that meets the full set of enterprise data science team needs, to the detriment of bewildered customers who have had to build or find their own solutions. Microsoft has filled that gap and then some.&lt;/STRONG&gt;&lt;EM&gt;”&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-right"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-right"&gt;&lt;EM&gt;-&amp;nbsp; The Forrester Wave™:&amp;nbsp; Notebook-based Predictive Analytics and Machine Learning, Q3 2020&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s take a closer look at the capabilities that Azure Machine Learning features:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Collaboration and productivity – Azure Machine Learning offers customers a &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace" target="_blank" rel="noopener"&gt;collaborative workspace&lt;/A&gt; with a &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks" target="_blank" rel="noopener"&gt;dedicated notebook-based machine learning experience&lt;/A&gt; along with &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance" target="_blank" rel="noopener"&gt;integrated compute (CPU and GPU) environments&lt;/A&gt; with support for all open source tools, frameworks and libraries. The Azure Machine Learning SDK, offered for both &lt;A href="https://docs.microsoft.com/en-us/python/api/overview/azure/ml/?view=azure-ml-py" target="_blank" rel="noopener"&gt;Python&lt;/A&gt; and &lt;A href="https://azure.github.io/azureml-sdk-for-r/reference/index.html" target="_blank" rel="noopener"&gt;R&lt;/A&gt;, also makes it simple for people using tools like Jupyter, VS Code, or any other notebook environment/IDE to collaborate and reap the benefits of Azure Machine Learning. This makes data scientists productive from day 1.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Comprehensive coverage of the ML lifecycle – Azure Machine Learning helps with every step of the ML lifecycle. From data preparation and modelling, through to deployment and monitoring, every aspect of machine learning is carefully optimized with features like &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-images" target="_blank" rel="noopener"&gt;data labelling&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters" target="_blank" rel="noopener"&gt;hyperdrive&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines" target="_blank" rel="noopener"&gt;pipelines&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets" target="_blank" rel="noopener"&gt;drift monitoring&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-responsible-ml" target="_blank" rel="noopener"&gt;responsible ML&lt;/A&gt; toolkits. Azure Machine Learning also brings optimizers for popular frameworks and libraries to ensure that the training process runs optimally.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Enterprise readiness – Azure Machine Learning is a service that enables operationalization of ML models irrespective of how stringent the criteria are. Offering a best-in-class MLOps experience, Azure Machine Learning is equipped to help implement a robust deployment pipeline with automated monitoring and retraining capabilities. Azure Machine Learning also offers security and governance features like &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;private link&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security#azure-cosmos-db" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;customer managed keys&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-training-vnet#compute-instance" target="_blank" rel="noopener"&gt;VNet&lt;/A&gt;, &lt;SPAN&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;role-based access control (RBAC)&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Ecosystem – Azure Machine Learning is part of a growing ecosystem of services with Azure’s Data &amp;amp; AI offerings. It integrates natively with services like &lt;A href="https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-analyze-with-azure-machine-learning" target="_blank" rel="noopener"&gt;Azure Synapse Analytics&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow-azure-databricks" target="_blank" rel="noopener"&gt;Azure Databricks&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/power-bi/transform-model/service-machine-learning-integration" target="_blank" rel="noopener"&gt;Power BI&lt;/A&gt;, to offer customers the flexibility to leverage the engine of their choice, like Apache Spark™ and/or SQL, for data wrangling and model scoring. It also brings a strong partner ecosystem and a dedicated certification accompanied by self-paced learning courses on Microsoft Learn and Udacity.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We feel Azure Machine Learning is the best environment for any organization that is building an ML practice with a code-first approach. Be it with notebooks or IDEs, the Azure Machine Learning Studio and the accompanying SDK make Azure Machine Learning capabilities omnipresent across developers' and data scientists' tools and offer the best way to do secure, managed, and scalable data science.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;
&lt;P&gt;&lt;EM&gt;“&lt;/EM&gt;&lt;STRONG&gt;Microsoft provides coding data scientists with all the bells and whistles.&lt;/STRONG&gt;&lt;EM&gt;”&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-right"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-right"&gt;&lt;EM&gt;- The Forrester Wave™:&amp;nbsp; Notebook-based Predictive Analytics and Machine Learning, Q3 2020&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Although we doubled down on a lot of the notebook-based capabilities, Azure Machine Learning offers even more to the developer data scientist and/or citizen data scientist community. Capabilities like &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml" target="_blank" rel="noopener"&gt;automated ML&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-designer" target="_blank" rel="noopener"&gt;designer&lt;/A&gt;, which we’ve recently made generally available, offer an experience where users can build machine learning models without knowing the intricacies of how frameworks and algorithms work. They can let the platform do the heavy lifting for them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft’s mission is &lt;STRONG&gt;to empower every person and every organization on the planet to achieve more&lt;/STRONG&gt;. With Azure Machine Learning we’re trying to accomplish this for the data science community.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 02 Oct 2020 05:39:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/microsoft-named-a-leader-in-forrester-s-notebook-based/ba-p/1718391</guid>
      <dc:creator>Nishant Thacker</dc:creator>
      <dc:date>2020-10-02T05:39:56Z</dc:date>
    </item>
    <item>
      <title>Computer Vision for spatial analysis at the Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/ba-p/1666313</link>
      <description>&lt;P&gt;Today businesses use manual processes to understand their physical spaces and meet business requirements such as maximizing revenue for store layouts, compliance, worker safety in manufacturing plants, and more. These manual processes are occurring infrequently and through inefficient methods where employees manually count people entering stores and visually monitor social distancing requirements.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Computer Vision, an Azure Cognitive Service, is introducing the &lt;STRONG&gt;spatial analysis&lt;/STRONG&gt; feature to meet the needs of businesses across a variety of industries. Spatial analysis uses Computer Vision AI on real-time video and offers the ability to understand people’s movements in a physical space, significantly increasing efficiency and resolution of customer data. Equipped with this net new knowledge, employee time can be spent on high value experiences, to maximize return on investment, including efficient product placement, shelf stocking, sanitation, and focusing on improving the customer experience.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With AI Edge container support, spatial analysis provides the flexibility to solve current challenges around data control, privacy, and network intensive video AI workloads. Spatial analysis brings the power of the Intelligent Edge and the Azure cloud to create powerful business applications at scale.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="IntelligentEdge2.png" style="width: 800px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218230iA71E9E2DB3239907/image-size/large?v=v2&amp;amp;px=999" role="button" title="IntelligentEdge2.png" alt="Spatial analysis runs on the Intelligent Edge" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Spatial analysis runs on the Intelligent Edge&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Understanding the physical world using video is very complex, requiring deployments of video pipelines, hardware, AI models, and processing insights, all of which need to be connected into the existing infrastructure and aggregated into useful views. Organizations want simple deployments that can be hooked into their existing cameras as well as using new cameras that can be quickly installed to get immediate business value. The goal of this post is to demonstrate how enterprise applications can use the spatial analysis container on the edge, and how to use the Azure cloud to create powerful business applications.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;H2&gt;&lt;STRONG&gt;Customer Story&lt;/STRONG&gt;&lt;/H2&gt;&lt;P&gt;RXR Realty needed a way to integrate new safety measures for tenants after its buildings reopened for business during the COVID-19 pandemic. The RxWell™ solution built by RXR is a comprehensive, public-health–based, data-driven program that merges physical and digital assets to help keep employees informed and supported during the “new abnormal” and beyond. RXR leveraged the &lt;A href="https://azure.microsoft.com/services/cognitive-services/computer-vision/" target="_self"&gt;Computer Vision spatial analysis capabilities&lt;/A&gt;&amp;nbsp;of Cognitive Services to count the number of people in a specific zone and calculate the distance between each person. RxWell™ securely runs the models at the edge in Docker containers on Azure Stack Edge hardware. 
The resulting AI insights are sent to the cloud, where they’re used for real-time alerting and historical trending. To learn more about the RxWell™ solution with spatial analysis see &lt;A href="https://customers.microsoft.com/en-us/story/843823-rxr-realty-reopens-for-business-using-azure-iot" target="_self"&gt;this article&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Spatial analysis container for Azure IoT Edge deployments&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Computer Vision for spatial analysis is a collection of AI operations using AI models. These operations are the building blocks that enable scenarios including:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Counting people in a space for maximum occupancy&lt;/LI&gt;&lt;LI&gt;Understanding the distance between people for social distancing measures&lt;/LI&gt;&lt;LI&gt;Determining footfall such as in retail spaces&lt;/LI&gt;&lt;LI&gt;Understanding dwell time such as in front of a retail display or other designated location&lt;/LI&gt;&lt;LI&gt;Determining wait time in a queue&lt;/LI&gt;&lt;LI&gt;Determining when people are in a forbidden zone such as near industrial equipment&lt;/LI&gt;&lt;LI&gt;Determining trespassing in protected areas&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;This list portrays a few examples of how spatial analysis can be utilized in a variety of scenarios. In addition to these scenarios, real-time messaging can power customer workflows including Teams, Power Apps, Power Automate, and saved events can enable powerful reports with Power BI.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Developers, enterprise solution providers, and customers with development capabilities can use these AI operations to build edge or cloud solutions, with no Machine Learning experience required. The ready-to-use, high-quality AI Models for spatial analysis operations are trained to understand people movement across a wide variety of scenarios, camera types, angles, and lighting conditions.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Spatial analysis operations implement a real-time video pipeline to connect to new and existing RTSP cameras, including Closed Circuit TV systems. The deployment of the spatial analysis container on edge devices is facilitated by Azure IoT Hub. You're in control of your edge hardware located on your premises. The recommended edge device is Azure Stack Edge with the Nvidia T4 GPU. Other edge devices may be used if an Nvidia T4 GPU is available.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpatialAnalysis-Diagram2.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219768i967705A16FF9C6A9/image-size/large?v=v2&amp;amp;px=999" role="button" title="SpatialAnalysis-Diagram2.png" alt="Spatial analysis container deployment with Azure IoT" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Spatial analysis container deployment with Azure IoT&lt;/span&gt;&lt;/span&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;You need to install the camera devices in your physical space following the manufacturer’s instructions. The edge device hosting the container needs to be able to connect to these cameras over the RTSP protocol and stream video. Camera placement is an important step in the deployment process. 
For general guidelines and specific recommendations for height, angle, and camera-to-focal-point-distance, see the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-camera-placement" target="_blank" rel="noopener"&gt;Camera placement article.&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When video is streamed and processed by the spatial analysis AI models,&amp;nbsp;the container emits AI Insight events about people’s movement, which in turn are sent to Azure IoT Hub as IoT telemetry. From IoT Hub you can create various routes to other Azure services and build your business solution. You may also decide to process the AI Insights locally on the edge device.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Live Video Analytics with spatial analysis&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P data-unlink="true"&gt;The spatial analysis container can be deployed side by side with the &lt;A href="https://aka.ms/lva-spatial-analysis" target="_self"&gt;Live Video Analytics&lt;/A&gt; container. In this configuration, the Live Video Analytics&amp;nbsp;container will stream the live video from the RTSP cameras, and it will invoke spatial analysis for AI processing. The Live Video Analytics and spatial analysis containers running on Azure IoT Edge can be configured to enable rich video analysis and recording of video clips locally or to Azure Blob Storage.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="LVA-SpatialAnalysis-Diagram.png" style="width: 800px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218243i584E385773358F7C/image-size/large?v=v2&amp;amp;px=999" role="button" title="LVA-SpatialAnalysis-Diagram.png" alt="Azure IoT deployment for Live Video Analytics and spatial analysis containers" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure IoT deployment for Live Video Analytics and spatial analysis containers&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Building business applications with spatial analysis AI insights&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;You can build various business applications using the spatial analysis AI insights for people movement. Follow these &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_self"&gt;instructions&lt;/A&gt; to deploy a sample Azure Web Application that presents a live view of people counting events in a physical space. The AI insights events emitted by the spatial analysis container are sent to Azure IoT Hub and subsequently to the Web Application, which in this case implements a visualization with a chart that updates the people count in real time as shown below. You can further modify this app with other spatial analysis operations and make modifications based on the event output of the container.
When it comes to business decisions centered around social distancing requirements, determining store layouts, adding team members to reduce wait times, or improving customer satisfaction, spatial analysis makes it easy to gain these valuable insights.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SolutionApp3.1.png" style="width: 614px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218232iBAE1576B55FEFD91/image-size/large?v=v2&amp;amp;px=999" role="button" title="SolutionApp3.1.png" alt="Person count visualization in a web application" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Person count visualization in a web application&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Responsible AI &amp;amp; Innovation&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Microsoft is releasing Computer Vision for spatial analysis together with &lt;A href="https://docs.microsoft.com/legal/cognitive-services/computer-vision/responsible-use-deployment" target="_blank" rel="noopener"&gt;responsible deployment guidance&lt;/A&gt;&amp;nbsp; grounded in user and societal research.&lt;/P&gt;&lt;P&gt;Microsoft developed the Responsible Deployment recommendations by applying many of the &lt;A href="https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/" target="_blank" rel="noopener"&gt;responsible innovation best practices&lt;/A&gt; in collaboration with customers to uncover deployment recommendations for spatial analysis in accordance with &lt;A href="https://www.microsoft.com/ai/responsible-ai" target="_blank" rel="noopener"&gt;Microsoft Responsible AI Principles&lt;/A&gt;: fairness, reliability &amp;amp; safety, privacy &amp;amp; security, inclusiveness, transparency and human accountability.&lt;/P&gt;&lt;P&gt;Microsoft’s principled approach enables developers to build rich solutions while upholding the human dignity and the needs of everyone impacted by the technology.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Get Started Today&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Learn more with&amp;nbsp;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-container" target="_blank" rel="noopener"&gt;Computer Vision for spatial analysis documentation&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Follow the tutorial to&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/spatial-analysis-web-app" target="_blank" rel="noopener"&gt;Create a People Counting Web App&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Get started with &lt;A href="https://azure.microsoft.com/en-us/products/azure-stack/edge/#getting-started" target="_blank" rel="noopener"&gt;Azure Stack Edge&lt;/A&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Learn about &lt;A href="https://azure.microsoft.com/en-us/services/iot-hub" target="_blank" rel="noopener"&gt;Azure IoT Hub&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 22 Sep 2020 15:00:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-for-spatial-analysis-at-the-edge/ba-p/1666313</guid>
      <dc:creator>AdinaTru</dc:creator>
      <dc:date>2020-09-22T15:00:52Z</dc:date>
    </item>
    <item>
      <title>Automatically detect audio language with the Speech Language Detection Container</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/automatically-detect-audio-language-with-the-speech-language/ba-p/1694363</link>
      <description>&lt;P&gt;We are excited to announce the&lt;STRONG&gt; release of the Speech Language Detection Container for Public Preview!&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Speech Language Detection feature is used to determine the most likely language match for a given audio input where the language is not already known. By doing so, it unlocks a vast number of scenarios for our Speech-to-Text service and helps eliminate the language barrier.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="katerinaprastakou_0-1600714011594.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220317i2C2C9318A3E1B5C9/image-size/large?v=v2&amp;amp;px=999" role="button" title="katerinaprastakou_0-1600714011594.png" alt="katerinaprastakou_0-1600714011594.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Since the release of Speech Language Detection as an online service on Azure Cognitive Services, we have watched you enable new scenarios along with our Speech-to-Text and Translation services that open new doors for &lt;STRONG&gt;productivity&lt;/STRONG&gt; and &lt;STRONG&gt;accessibility&lt;/STRONG&gt;. Multilingual meetings, call center conversations, voicemails, and video streams can now capture every word for captioning and analytical insights. Spoken machine translation can automatically determine the source language without manual selection.&amp;nbsp; And recommendation systems can better promote video content users can understand.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Control over your data&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Perhaps you have wanted to explore the Speech Language Detection feature before but were limited by data restrictions, whether because of data regulations or because you could not, or did not want to, load all your data into the cloud. By using the container version, you can now use the Speech Language Detection feature with complete control over your data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Control over your throughput&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Have you had to deal with a weak network connection or disconnected environments leading to high latency? With the container, you can scale for high-throughput, low-latency requirements by enabling Cognitive Services to run in Azure Kubernetes Service physically close to your application logic and data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Portable architecture&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;You can create a portable application architecture that can be deployed in the cloud, on-premises, or at the edge. This gives you the flexibility to add or remove containers very easily and adapt to what your project needs.&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="katerinaprastakou_1-1600714011602.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220319i95C3FE413E70D7A0/image-size/large?v=v2&amp;amp;px=999" role="button" title="katerinaprastakou_1-1600714011602.png" alt="katerinaprastakou_1-1600714011602.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;How It Works&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;The process is quite simple:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Client sends a request to the container with an audio file or stream&lt;/LI&gt;
&lt;LI&gt;Container receives the request&lt;/LI&gt;
&lt;LI&gt;Container runs language detection model&lt;/LI&gt;
&lt;LI&gt;Container returns the highest language match to the client&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Conceptually depicted below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="katerinaprastakou_2-1600714011611.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220318iC51DC77A65C60E82/image-size/large?v=v2&amp;amp;px=999" role="button" title="katerinaprastakou_2-1600714011611.png" alt="katerinaprastakou_2-1600714011611.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The result to the client will ultimately look like this:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="katerinaprastakou_3-1600714011619.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220320i0B6A031A74347FF8/image-size/large?v=v2&amp;amp;px=999" role="button" title="katerinaprastakou_3-1600714011619.png" alt="katerinaprastakou_3-1600714011619.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Get started with installing the Speech Language Detection container&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Learn more about how to download and run Speech service containers, including Speech Language Detection, by visiting our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;. Let us know your thoughts or new features you would like to see on this container!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information you can also explore:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-automatic-language-detection?pivots=programming-language-python" target="_self"&gt;Automatic language detection for speech to text&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/containers/container-faq" target="_self"&gt;Speech service containers frequently asked questions (FAQ)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/" target="_self"&gt;Overall Speech Service documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_self"&gt;Containers overall documentation in Speech Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support?tabs=luis" target="_self"&gt;Container support in Azure Cognitive Services&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://hub.docker.com/_/microsoft-azure-cognitive-services" target="_self"&gt;Docker Hub landing page&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 14 Apr 2021 23:46:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/automatically-detect-audio-language-with-the-speech-language/ba-p/1694363</guid>
      <dc:creator>katerinaprastakou</dc:creator>
      <dc:date>2021-04-14T23:46:59Z</dc:date>
    </item>
    <item>
      <title>Computer Vision Read (OCR) API previews new languages and docker containers</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-new-languages-and-docker/ba-p/1688690</link>
      <description>&lt;H2&gt;Overview&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Businesses today are racing to convert their scanned paper documents, digital files, and even on-screen content into actionable insights. These insights power knowledge mining, business process automation, and accessibility for everyone regardless of the source of content, location of users, and the language and medium of communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Optical Character Recognition (OCR) is the foundational technology that drives the digitization of content today by extracting text from images, documents, and screens. There are several OCR technology providers that provide this capability as services, tools, and solutions, both in the cloud and for deployment within your environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;However, there are several challenges to successfully implementing OCR at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Challenges&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Text extraction quality&lt;/H3&gt;
&lt;P&gt;To extract text with high accuracy from the diverse content types, formats, and mediums, your OCR should be of the highest out-of-the-box quality, work on a variety of content textures, fonts, and styles, and be easy to integrate by using cloud APIs and SDKs.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;Cloud and on-premises compliance&lt;/H3&gt;
&lt;P&gt;If you are a business that serves customers in healthcare, insurance, banking, or other verticals with additional data privacy and security requirements, you typically require not just secure online access but also the flexibility to deploy within your own network so that personal data never leaves it.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;Mixed languages&lt;/H3&gt;
&lt;P&gt;Your customers and users are global, so your OCR should also support international languages and locales. Their documents most likely contain text in multiple languages that are impossible to identify manually as you scan them at scale.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;Handwritten text&lt;/H3&gt;
&lt;P&gt;Finally, your documents and forms have both print and handwritten text. To combat this challenge, you should use a technology that seamlessly handles both styles of text in the same document.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Computer Vision Read (OCR)&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt;Microsoft’s Computer Vision OCR (Read)&lt;/A&gt; capability is available as a Cognitive Services Cloud API and as Docker containers. Customers use it in diverse scenarios on the cloud and within their networks to solve the challenges listed in the previous section.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following figure illustrates the high-level flow of the OCR process. The input is your document or image. The service extracts the text and converts it into a structured JSON response that includes the extracted text lines and words with their bounding boxes and confidence scores. You integrate with the service or the containers with a simple API that’s described next.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="how-ocr-works.png" style="width: 731px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220003iFB012FD3C80BFF70/image-size/large?v=v2&amp;amp;px=999" role="button" title="how-ocr-works.png" alt="OCR (Read) Cloud API overview" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR (Read) Cloud API overview&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;At its core, the OCR process breaks it down into two operations. You use the &lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Read&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; operation to submit your image or document. That starts an asynchronous process that you poll with the &lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Get Read Results&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; operation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;Read operation&lt;/H3&gt;
&lt;P&gt;Call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d986960601faab4bf452005" target="_blank" rel="noopener"&gt;Read operation&lt;/A&gt; to extract the text. The call returns with a response header field called Operation-Location. The Operation-Location value is a URL that contains the Operation ID to be used in the next step.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See the full &lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/quickstarts/python-hand-text" target="_blank" rel="noopener"&gt;Read OCR REST API QuickStart in Python&lt;/A&gt;&amp;nbsp;for the following code snippets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;text_recognition_url = endpoint + "/vision/v3.0/read/analyze"

# Set image_url to the URL of an image that you want to recognize.
image_url = "https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/cognitive-services/Computer-vision/Images/readsample.jpg"

headers = {'Ocp-Apim-Subscription-Key': subscription_key}
data = {'url': image_url}
response = requests.post(
    text_recognition_url, headers=headers, json=data)
response.raise_for_status()

# Extracting text requires two API calls: One call to submit the
# image for processing, the other to retrieve the text found in the image.

# Holds the URI used to retrieve the recognized text.
operation_url = response.headers["Operation-Location"]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get Read results operation&lt;/H3&gt;
&lt;P&gt;Call the &lt;A href="https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d9869604be85dee480c8750" target="_blank" rel="noopener"&gt;Get Read Results operation&lt;/A&gt; until it returns with a completed status. This operation takes as input the operation ID that was created by the Read operation. It returns a JSON response that contains a status field with the following possible values.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while (poll):
    response_final = requests.get(
        response.headers["Operation-Location"], headers=headers)
    analysis = response_final.json()
    
    print(json.dumps(analysis, indent=4))

    time.sleep(1)
    if ("analyzeResult" in analysis):
        poll = False
    if ("status" in analysis and analysis['status'] == 'failed'):
        poll = False

polygons = []
if ("analyzeResult" in analysis):
    # Extract the recognized text, with bounding boxes.
    polygons = [(line["boundingBox"], line["text"])
                for line in analysis["analyzeResult"]["readResults"][0]["lines"]]&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;New Cloud API and Container releases&lt;/H2&gt;
&lt;P&gt;During Ignite 2020, we are announcing new cloud service and container releases.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;New features in Read 3.1 preview (cloud and container)&lt;/H3&gt;
&lt;P&gt;The new Read 3.1 preview for cloud and containers adds these capabilities:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1. Support for Simplified Chinese and Japanese languages. See all &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support" target="_blank" rel="noopener"&gt;supported languages&lt;/A&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The following images show Simplified Chinese and Japanese text lines extracted respectively, along with their bounding boxes (locations).&lt;/LI&gt;
&lt;/UL&gt;
&lt;TABLE class="lia-align-left" style="height: 100%; width: 100%; border-style: none;" border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="50%"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="OCR-Read-Chinese.png" style="width: 245px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219991iD0CAC65E6BA3839D/image-dimensions/245x147?v=v2" width="245" height="147" role="button" title="OCR-Read-Chinese.png" alt="OCR (Read) Simplified Chinese" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR (Read) Simplified Chinese&lt;/span&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;TD width="50%"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="OCR-Read-Japanese.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219992i020CCB95C8462479/image-size/small?v=v2&amp;amp;px=200" role="button" title="OCR-Read-Japanese.png" alt="OCR (Read) Japanese" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR (Read) Japanese&lt;/span&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;2. Indicating whether the appearance of text is &lt;STRONG&gt;Handwriting&lt;/STRONG&gt; or &lt;STRONG&gt;Print&lt;/STRONG&gt; style, along with a confidence score (Latin languages only).&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The following image shows that the handwritten style text is not only correctly extracted but its appearance is recognized as handwriting style along with a confidence score for more granular control by the invoking business process.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="OCR-Read-print-handwritten-classify.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219993iC679003DD858B1B4/image-size/small?v=v2&amp;amp;px=200" role="button" title="OCR-Read-print-handwritten-classify.png" alt="OCR (Read) Print vs Handwritten Classification" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;OCR (Read) Print vs Handwritten Classification&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;3. Ability to extract text for selected pages or page range within multi-page documents.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The following JSON response includes only the selected pages (201 and 202) when the &lt;STRONG&gt;Read&lt;/STRONG&gt; operation was called with a &lt;STRONG&gt;pages&lt;/STRONG&gt; query parameter of "201,202" for a 500-page document.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "status": "succeeded",
  "createdDateTime": "2020-09-08T19:23:19Z",
  "lastUpdatedDateTime": "2020-09-08T19:23:35Z",
  "analyzeResult": {
    "version": "3.1.0",
    "readResults": [
      {
        "page": 201,
        "angle": 0,
        "width": 8.2639,
        "height": 11.6944,
        "unit": "inch",
        "language": "",
        "lines": [...]
       },
      {
        "page": 202,
        "angle": 0,
        "width": 8.2639,
        "height": 11.6944,
        "unit": "inch",
        "language": "",
        "lines": [...]
       }
    ]
  }
}&lt;/LI-CODE&gt;
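&lt;P&gt;For reference, a request for specific pages might look like the following minimal sketch, which reuses the requests-based pattern from the earlier QuickStart snippets. The v3.1-preview.2 path and the placeholder document URL are assumptions for illustration; confirm the exact route against the preview API reference linked below.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import requests

# Minimal sketch: ask the Read 3.1 preview to process only pages 201 and 202.
# 'endpoint', 'subscription_key', and the exact preview path are assumptions here.
text_recognition_url = endpoint + "/vision/v3.1-preview.2/read/analyze"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
document_url = "https://example.com/sample-500-page.pdf"  # placeholder document URL
data = {'url': document_url}
params = {'pages': '201,202'}

response = requests.post(text_recognition_url, headers=headers, json=data, params=params)
response.raise_for_status()

# Poll the Operation-Location URL for results, exactly as in the earlier snippet.
operation_url = response.headers["Operation-Location"]&lt;/LI-CODE&gt;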
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Read 3.1 cloud API preview&lt;/H2&gt;
&lt;P&gt;We are releasing the new &lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005" target="_blank" rel="noopener"&gt;Read 3.1 preview cloud API&lt;/A&gt; with the following features:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Support for Simplified Chinese and Japanese&lt;/LI&gt;
&lt;LI&gt;Print vs. handwriting appearance for each text line with confidence scores&lt;/LI&gt;
&lt;LI&gt;Extract text from only selected page(s) from a large multi-page document&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="cognitive-services.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219996i3C4E008D97CF3571/image-size/small?v=v2&amp;amp;px=200" role="button" title="cognitive-services.png" alt="MIcrosoft Cognitive Services" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;MIcrosoft Cognitive Services&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;See the &lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/whats-new#september-2020" target="_blank" rel="noopener"&gt;Read API overview&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt; to learn more.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H2&gt;Read 3.0 and Read 3.1 container previews&lt;/H2&gt;
&lt;P&gt;We are announcing two new container releases.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="docker-containers.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219997iB5A696CDC2F85234/image-size/small?v=v2&amp;amp;px=200" role="button" title="docker-containers.png" alt="docker containers" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;docker containers&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;The Read 3.0 container preview&lt;/H3&gt;
&lt;P&gt;The Read 3.0 container preview is the on-premises version of the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt;Read 3.0 Cloud API&lt;/A&gt; that’s generally available (GA) today.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Read 3.0 container preview is a significant upgrade to the Read 2.0 container preview that’s available today. Major features include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Enhanced accuracy based on updated deep learning models&lt;/LI&gt;
&lt;LI&gt;Support for Dutch, English, French, German, Italian, Portuguese, and Spanish&lt;/LI&gt;
&lt;LI&gt;Support for multiple languages within the same document&lt;/LI&gt;
&lt;LI&gt;Single operation for both documents and images&lt;/LI&gt;
&lt;LI&gt;Support for significantly larger documents and images&lt;/LI&gt;
&lt;LI&gt;Confidence scores with a full range of 0 to 1 instead of labels (“low” only), for granular visibility&lt;/LI&gt;
&lt;LI&gt;Support for mixed model documents with print and handwritten style text&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;The Read 3.1 container preview&lt;/H3&gt;
&lt;P&gt;The Read 3.1 container preview is the on-premises version of the Read 3.1 API preview features reviewed in the previous section. Moving forward, Read 3.1 and newer versions will add expanded language coverage and enhancements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Read 3.1 container preview includes everything that Read 3.0 has and adds the following additional capabilities covered in the previous section:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Support for Simplified Chinese and Japanese&lt;/LI&gt;
&lt;LI&gt;Print vs. handwriting appearance for each text line with confidence scores&lt;/LI&gt;
&lt;LI&gt;Extract text from only selected page(s) from a large multi-page document&lt;/LI&gt;
&lt;LI&gt;Unified code and architecture ensure that the container stays in step with future cloud API releases.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Get started with containers&lt;/H3&gt;
&lt;P&gt;Learn how to&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;install and run the Read containers&lt;/A&gt; to get started and find the recommended configuration settings. If you are using Read 2.0 containers today, a migration guide is available for help along the way.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;When to deploy which version&lt;/H3&gt;
&lt;P data-unlink="true"&gt;Can’t wait? Deploy the Read 3.0 container&amp;nbsp; preview knowing that this release is on track for general availability (GA) soon. Want more languages and enhancements? Deploy the Read 3.1 container&amp;nbsp; preview if you can wait a tad longer for more features.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://go.microsoft.com/fwlink/?linkid=2140367" target="_blank" rel="noopener"&gt;Deploy the Read container of your choice&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What customers say&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Instabase&lt;/H3&gt;
&lt;P&gt;Instabase is a technology platform for business productivity applications that can be deployed in the cloud or on-premises.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="instabase-logo.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219998i233FEBBA415E7C78/image-size/medium?v=v2&amp;amp;px=400" role="button" title="instabase-logo.png" alt="Instabase logo" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Instabase logo&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;"Microsoft's Read OCR container technology provides a powerful option that our customers can leverage to read text from documents, which Instabase uses to generate understanding, without data ever leaving their firewall, ensuring data privacy and security. This is essential for our banking and enterprise customers." - &lt;EM&gt;&lt;STRONG&gt;Justin Herlick, Product Manager, Instabase&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;GE Aviation&lt;/H3&gt;
&lt;P&gt;While mandatory for regulatory compliance, assembling a complete back-to-birth aircraft maintenance record is an expensive, time-consuming, and unreliable process. GE Aviation’s Digital Solutions Group built the AirVault solution to solve the challenge.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ge-aviation-logo.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219999i83EC9232DD5881E1/image-size/medium?v=v2&amp;amp;px=400" role="button" title="ge-aviation-logo.png" alt="GE Aviation logo" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;GE Aviation logo&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;“It can be a huge task across an airline's entire fleet to record and then easily retrieve evidence of maintenance activity or compliance to an Airworthiness Directive. A lot of what gets archived, things like parts receipts, vendor airworthiness certificates or maintenance records are still paper-based, and these documents can contain both handwriting as well as printed text. Microsoft’s Computer Vision OCR technology helped us to greatly enhance our full text word search capability during the conversion of paper documents to digital format as well as documents that were never printed in the first place with scale, speed, and accuracy.” -&lt;STRONG&gt; &lt;EM&gt;Nate Hicks, Sr. Product Group Leader at GE Aviation Digital Solutions&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/YknxjEe779c?rel=0" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/YknxjEe779c/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;NHS Business Services Authority&lt;/H3&gt;
&lt;P&gt;UK’s NHS Business Services Authority (NHS BSA) is a Special Health Authority and an arm's length body of the Department of Health and Social Care (DHSC).&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nhs-business-authority-logo.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220000iC18FE18AEBD411C3/image-size/medium?v=v2&amp;amp;px=400" role="button" title="nhs-business-authority-logo.png" alt="NHS Business Authority logo" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;NHS Business Authority logo&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;“We believe that we can do so much more by using AI to read our documentation, to read more fields on that, and to read handwritten info, and to use that AI engine to deliver better taxpayer value, to deliver better outcomes, and deliver better patient safety.” -&lt;STRONG&gt;&lt;EM&gt; Michael Brodie, Chief Executive, NHS BSA&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://customers.microsoft.com/en-us/story/825757-nhsbsa" target="_blank" rel="noopener"&gt;Read the case study&lt;/A&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/free/cognitive-services/" target="_blank" rel="noopener"&gt;Create a Computer Vision resource&lt;/A&gt; in Azure and follow one of our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/csharp-hand-text" target="_blank" rel="noopener"&gt;QuickStarts&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/concept-recognizing-text" target="_blank" rel="noopener"&gt; OCR (Read)&lt;/A&gt; and &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Learn more about the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-how-to-install-containers" target="_blank" rel="noopener"&gt;Read containers&lt;/A&gt; and download them from Docker Hub.&lt;/LI&gt;
&lt;LI&gt;Write to us at &lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 15 Mar 2021 00:20:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/computer-vision-read-ocr-api-previews-new-languages-and-docker/ba-p/1688690</guid>
      <dc:creator>sanjeev_jagtap</dc:creator>
      <dc:date>2021-03-15T00:20:11Z</dc:date>
    </item>
    <item>
      <title>Accelerate self-paced learning at the edge with Speech Containers</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-self-paced-learning-at-the-edge-with-speech/ba-p/1636986</link>
      <description>&lt;P class="lia-align-justify"&gt;We are pleased to announce that &lt;STRONG&gt;Speech to Text&lt;/STRONG&gt; and &lt;STRONG&gt;Text to Speech&lt;/STRONG&gt; containers from Azure Cognitive Services are now &lt;STRONG&gt;Generally Available (GA)&lt;/STRONG&gt;. Using these containers, customers can build a speech application architecture that is optimized for both robust cloud capabilities and edge locality.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With Speech to Text in containers, businesses across industries have unlocked new productivity gains and insights by enabling real-time and batch transcription of audio streams into text. With Text to Speech customers can enable applications, tools, or devices to convert text into human-like synthesized speech.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="speech.jpg" style="width: 930px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216647iF306FCF20160C51E/image-size/large?v=v2&amp;amp;px=999" role="button" title="speech.jpg" alt="speech.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Organizations ranging from banking, telecom, aerospace and defense leverage speech containers to solve great business needs including: call center transcription &amp;amp; analytics, self-paced learning tools, and intelligent kiosks. Azure is the only cloud provider enabling customers with full flexibility of running artificial intelligence on their own terms, whether on-premises or or at the edge.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The &lt;STRONG&gt;goal&lt;/STRONG&gt; of this post is to show how our customers leverage containers to solve AI needs at the edge.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&lt;A href="https://www.airbus.com/" target="_blank" rel="noopener"&gt;Airbus&lt;/A&gt;&lt;/STRONG&gt; is an international leader in the aerospace sector.&amp;nbsp;They design, manufacture and deliver industry-leading commercial aircraft, helicopters, military transports, satellites and launch vehicles, as well as providing data services, navigation, secure communications, urban mobility and other solutions for customers on a global scale.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With Azure Cognitive Services, Airbus advances its aerospace operations, specifically their pilot training chatbots to harness the speech capabilities to engage and educate pilot staff. By integrating Azure AI speech and transcription capabilities, Airbus was able to engage and educate pilot staff with most up to date detail and safe practices. Watch this video for more details:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;U&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fyoutu.be%2FQRprKorsDFQ&amp;amp;data=02%7C01%7CPhani.Mutyala%40microsoft.com%7Cdde1458d0ba847cb301208d85a89f444%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637358895844536244&amp;amp;sdata=Pu7iQRnxcXeod9AhgMfG6AZADShzIFUvrQhLvlpQ2C0%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;&lt;LI-VIDEO vid="https://youtu.be/QRprKorsDFQ" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/QRprKorsDFQ/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/A&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;The Problem and Customer Pain Point&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Airbus trains tens of thousands of commercial aircraft and military pilots annually. The customer pain point in pilot training is driven by the complexity of modern commercial and military aircraft. In recent years aircraft complexity has increased at such a rate that the scope of knowledge required for operating aircraft reliably and safely has increased exponentially.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The volume of training material amounting to a pilot training course content is rapidly approaching levels which are becoming difficult for trainees to retain with acceptable levels of recall and accuracy.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="lia-align-justify"&gt;How bad is the pain?&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The average pilot conversion course for an experienced pilot converting to a new aircraft platform exceeds more than 7000 pages of printed documentation. This content must be reviewed, committed to memory and recalled with very high levels of accuracy not only during the 10 to 12 week duration of the conversion training course, but throughout the entire operational life of the pilot. The Airbus pilot training chatbot has been developed on an enterprise chatbot platform and is being enhanced with Azure speech service capabilities.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="speech2.jpg" style="width: 937px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216648i0CC3B8EBF2D233CB/image-size/large?v=v2&amp;amp;px=999" role="button" title="speech2.jpg" alt="speech2.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;STRONG&gt;Pilot training chatbot and how it is integrated with Azure speech containers&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The objective of the pilot training chatbot is to provide pilot trainees with an alternative method for review, revision and self-paced learning. The pilot training chatbot is not designed to replace human flight instructors but rather looks to extend the coverage and access to their existing instructor knowledge base and supplements already developed standardized training methods. The chatbot is used to test knowledge areas for recall and accuracy.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Technical Challenges -&amp;nbsp;&lt;/STRONG&gt;The technical challenge for this project was to not depend on any public cloud services as, although initially focusing on civil aircraft, the projects aims to support military and governmental aircraft types as well.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This requires a disconnected and, in some &lt;FONT size="3"&gt;cases,&lt;/FONT&gt; air gapped style of deployment. The heart of the chatbot is implemented in an on premise enterprise chatbot platform. It has a powerful conversation engine; however, but doesn't include speech technologies. However these can be integrated into the conversation using the APIs of speech technology services.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The challenge to integrate a voice interface was addressed using the Cognitive Services (speech) containers. The containers have been deployed on a kubernetes cluster running in a secured environment, ensuring flexibility and ease of deployment.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The chatbot connects via API to the container using the Speech SDK. It forwards the user's speech input to convert to text output using the Azure on premise speech to text container and responds to the user in either text or voice, depending on the user’s settings. In case the user chooses the full speech mode it will vocalize answers by using the Text to Speech API container to receive an audio file, which is sent to the user’s device for playback. Since all APIs are in the same environment, latency is no issue and the interaction feels natural and fast.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The following graph gives a short overview of the communication flow:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2020-09-17 103010.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/219225iCC57D00E6B4F5E5C/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2020-09-17 103010.jpg" alt="Screenshot 2020-09-17 103010.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;STRONG&gt;Solution Overview -&amp;nbsp;&lt;/STRONG&gt;Speech to Text Integration&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The UI for the pilot training chatbot is a JavaScript Web UI incorporating HTML chat window display and controls. Once the Speech to Text container was deployed within the Kubernetes environment, the next step was to integrate Speech to Text into the chatbot Web UI.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For this step, the sample JavaScript code from the &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;Azure Cognitive Service Speech SDK&lt;/A&gt; proved to be a really useful resource. It provides code templates for Airbus to update and get started with connectivity to the STT Container, using the &lt;A href="https://docs.microsoft.com/en-us/javascript/api/overview/azure/speech-service?view=azure-node-latest" target="_blank" rel="noopener"&gt;Microsoft Speech SDK JavaScript library&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Once connectivity to the STT Kubernetes service was established, configured and tested with the STT settings and Speech SDK, all the required functions transferred directly into the chatbot JavaScript code. Utilizing the Speech SDK examples for the initial testing and configuration before integration into the target UI really saved time to achieve the final goal of integration of Speech capabilities for the chatbot.&lt;/P&gt;
&lt;DIV id="tinyMceEditorPhani_Mutyala_4" class="mceNonEditable lia-copypaste-placeholder lia-align-justify"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="speech4.jpg" style="width: 953px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216653iC2771874ECDE449F/image-size/large?v=v2&amp;amp;px=999" role="button" title="speech4.jpg" alt="speech4.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The &amp;nbsp;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;recognizeonceAsync&lt;/EM&gt; &lt;/FONT&gt;function was used as an example from the &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/browser/index.html" target="_blank" rel="noopener"&gt;sample&lt;/A&gt;, as it requires a single utterance transcription, after which to stops to await reply from the Pilot Training chatbot.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The continuous &lt;FONT color="#000000"&gt;&lt;EM&gt;recognitionAsync&lt;/EM&gt; &lt;/FONT&gt;function was not required but can be used if transcribing continuously until explicitly stopped is needed, e.g. dictation uses cases.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="applescript"&gt;reco.recognizeOnceAsync(
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; function (result) {
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; window.console.log(result);
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += "(continuation) Reason: " + SpeechSDK.ResultReason[result.reason];
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;switch (result.reason) {
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; case SpeechSDK.ResultReason.RecognizedSpeech:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += " Text: " + result.text;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; break;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; case SpeechSDK.ResultReason.NoMatch:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var noMatchDetail = SpeechSDK.NoMatchDetails.fromResult(result);
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += " NoMatchReason: " + SpeechSDK.NoMatchReason[noMatchDetail.reason];
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; break;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; case SpeechSDK.ResultReason.Canceled:
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var cancelDetails = SpeechSDK.CancellationDetails.fromResult(result);
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += " CancellationReason: " + SpeechSDK.CancellationReason[cancelDetails.reason];
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (cancelDetails.reason === SpeechSDK.CancellationReason.Error) {
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += ": " + cancelDetails.errorDetails;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; break;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; statusDiv.innerHTML += "\r\n";
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; phraseDiv.innerHTML = result.text + "\r\n";
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; sdkStopRecognizeOnceAsyncBtn.click();&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;Text to Speech Integration&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Airbus utilized the Azure Cognitive Service Speech SDK to get started with &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Text to Speech&lt;/A&gt; as soon as the container was deployed on their Kubernetes Infrastructure. The team selected &lt;EM&gt;Hazel UK&lt;/EM&gt; as the voice for the initial tests with speech synthesis due to the clarity of pronunciation, but neural speech synthesis is of huge interest to try the future.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For the speech synthesis within the Chat UI, the application calls the Text to Speech container’s Rest API directly without using the Speech SDK library, as it requires fewer configurations for the service call in comparison to Speech to Text.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For longer responses from the chatbot (more than 100 characters they set a lower prosody rate of 0.9 ( &lt;FONT color="#0000FF"&gt;&amp;lt;prosody rate="0.9"&amp;gt;&lt;/FONT&gt; ). This is so as to ensure the response is not too fast for the Pilot, enabling them to have time to process the response from the chatbot. In addition, we replace end of sentence full stops and commas with longer pauses (&lt;FONT color="#0000FF"&gt;'&amp;lt;break time="600ms"/&amp;gt;'&lt;/FONT&gt;). Again this allows the Pilot sufficient time to process the reply of the chatbot.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For example, &lt;U&gt;the sentence&lt;/U&gt;: &lt;EM&gt;“MTOW is an abbreviation for Maximum Takeoff Weight, which defines the maximum weight at which a pilot is allowed to attempt to take off, due to structural or other limits.”&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Would translate to:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;EM&gt;“&lt;FONT color="#0000FF"&gt;&amp;lt;say-as interpret-as="characters"&amp;gt;&lt;/FONT&gt;MTOW&lt;FONT color="#0000FF"&gt;&amp;lt;/say-as&amp;gt;&lt;/FONT&gt;&amp;nbsp;is an abbreviation for Maximum Takeoff Weight, &lt;/EM&gt;&lt;FONT color="#0000FF"&gt;&lt;EM&gt;&amp;lt;break time="600ms"/&amp;gt; &lt;/EM&gt;&lt;/FONT&gt;&lt;EM&gt;which defines the maximum weight at which a pilot is allowed to attempt to take off, due to structural or other limits.”&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The word “MTOW” is spoken as individual letters and an empathized pause is included after the first comma.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Measuring key results&lt;/FONT&gt; -&lt;/STRONG&gt; As an initial launch platform, the Pilot Training chat bot is targeting the Airbus A330 MRTT type, a military tanker refueling version of the standard Airbus A330 airliner.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Results and feedback have been extremely positive with both the instructor and trainee communities eagerly offering recommendations for content enhancements to supplement existing course material. Users feel the &lt;SPAN&gt;combination of AI and speech capability will bring about an enhanced learning experience, making for more efficient aircraft operation and safer skies.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;FONT size="5"&gt;Takeaways from this solution implementation&lt;/FONT&gt;&lt;/H2&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Azure Speech SDK really accelerated the integration of speech technologies with the pilot training chatbot, saving many hours of development. Documentation and developer samples are easy to follow and are available in a variety of programming languages.&lt;/LI&gt;
&lt;LI&gt;Standard Speech to Text containers bring both accuracy and speed of inference.&lt;/LI&gt;
&lt;LI&gt;Speech container’s pronunciation is clear and professional.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Containers allowed Airbus to use the latest technology with reasonable effort in even the strictest and most regulated environments, accelerating their&amp;nbsp;time to market and providing a clear &lt;STRONG&gt;differentiator&lt;/STRONG&gt; compared to other AI providers.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;STRONG&gt;Get Started...! Learn more about speech containers and deploy solutions at the edge&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify" data-unlink="true"&gt;Deploying your first container is about a 2-minute read, you basically create a resource at Azure portal, download image, run container with environmental variables. Here's a&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_self"&gt;document&lt;/A&gt;&amp;nbsp;to help you get started on running containers.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Containers available from Azure Speech Service are:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="speech6.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216778i7955C71630445018/image-size/large?v=v2&amp;amp;px=999" role="button" title="speech6.jpg" alt="speech6.jpg" /&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;STRONG&gt;Speech Service Documentation&lt;/STRONG&gt;&lt;/H2&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/" target="_self"&gt;Azure Speech Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk?tabs=windows%2Cubuntu%2Cios-xcode%2Cmac-xcode%2Candroid-studio" target="_self"&gt;Speech SDK&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format" target="_self"&gt;Speech Containers&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Cognitive Services containers&lt;/FONT&gt;&lt;/STRONG&gt;&lt;BR /&gt;Get Started, learn more and take advantage of Azure Cognitive Services containers to build intelligent applications today and learn more.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-align-justify"&gt;&lt;A href="https://aka.ms/cscontainers" target="_self"&gt;&lt;SPAN&gt;Containers documentation&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="lia-align-justify"&gt;&lt;A href="https://aka.ms/cscontainers-faq" target="_self"&gt;&lt;SPAN&gt;Containers FAQ&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI class="lia-align-justify"&gt;&lt;A href="https://hub.docker.com/_/microsoft-azure-cognitive-services" target="_self"&gt;&lt;SPAN&gt;Docker hub landing page&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 28 Sep 2020 16:31:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-self-paced-learning-at-the-edge-with-speech/ba-p/1636986</guid>
      <dc:creator>Phani_Mutyala</dc:creator>
      <dc:date>2020-09-28T16:31:01Z</dc:date>
    </item>
    <item>
      <title>Introducing Metrics Advisor - A new Cognitive Service</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-metrics-advisor-a-new-cognitive-service/ba-p/1668025</link>
      <description>&lt;P&gt;It is key to stay on top of the status of the physical assets, products, services, and business through data intelligence for companies and organizations which are embracing digital transformation. The way they are doing this is by extracting the key metrics which are proxies to those assets and monitoring the metrics 24X7. And if there is anything wrong detected, they would like to know immediately and act on that to prevent the small issues from becoming customer-impacting incidents. This becomes difficult when the data volume is huge, therefore identifying objects, groups of objects, events, or event patterns that deviate from the expected or norm with scale is needed.&lt;/P&gt;
&lt;P&gt;We are pleased to announce the preview of the Metrics Advisor, part of Azure Cognitive Services to address the need for metrics intelligence. The service ingests data from various sources, using machine learning to automatically find anomalies from sensors, products, and business metrics, and provides diagnostics insights. Metrics Advisor goes beyond simple Anomaly Detection by providing developers an out-of-the-box platform of multi-dimensional metric data ingestion, anomaly detection, and automatic model customization through user feedback powered by reinforcement learning. The capabilities of the pipeline can be easily used by developers to build predictive maintenance, AIOps (artificial intelligence for IT operations), and business metric monitoring solutions.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 565px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218342i434F524228A9AD4C/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Overview&lt;/H2&gt;
&lt;P&gt;Metrics Advisor leverages sophisticated mathematical techniques, including machine learning and other advanced analytics, to precisely detect subtle anomalies, provide earlier notice of likely future anomalies, and streamline the design and development of systems that detect (and even act on) anomalies. Built on the Anomaly Detector, it includes the capability to ingest data from various standard data sources, build models from that data, and perform model tuning and feedback-based model customization behind the scenes. Last but not least, it provides root cause analysis with advanced insights and recommended actions.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218344i4F1B56E3C6668374/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;What Can You Do with Metrics Advisor&lt;/H2&gt;
&lt;P&gt;Let me show you what kinds of problems you can solve with Metrics Advisor. Imagine you are the person responsible for the Contoso e-commerce website. To ensure both the business and its services are in good health, many important business and infrastructure metrics are generated and onboarded to Metrics Advisor, e.g., DAU (daily active users), CPU usage, web page latency, database throughput…&lt;BR /&gt;On a given date, a DAU anomaly was detected on the all-up metric aggregated across all regions and channels, and you got a notification.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 624px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218345i017C1FCCB379C436/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;At that moment, automated diagnostic info was already available on the portal and via APIs. It turned out that the leading contributors to this anomaly were the United States region and the Direct channel, so investigation and mitigation should start by focusing on those areas.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture4.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218346iF7667F23E693E1E4/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture4.png" alt="Picture4.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Were there any other underlying issues to look into? By checking the metrics graph, which depicts the dependencies across the infrastructure metrics, it became obvious that a MySQL problem caused the web app's latency issue, which propagated to impact the DAU of a specific region and channel.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture5.png" style="width: 481px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218347i550826DDBF6AD114/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture5.png" alt="Picture5.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;You, as the engineering lead of the Contoso e-commerce website, can easily get automated insight within a few minutes and identify the potential root cause. All of these operations can be done via the portal or APIs if you would like to embed this capability into your org's own experience. After Ignite 2020, we will launch SDKs to ease your coding against the APIs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Magic Behind the Scenes&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Model selection framework &amp;amp; tuning&lt;/H3&gt;
&lt;P&gt;Firstly, the time-series anomaly detection task is challenging because of the complex characteristics of time series, which are messy, stochastic, and often without proper labels. The lack of labels prohibits training supervised models, and a single model hardly fits different time series.&lt;BR /&gt;We present an automated model selection framework to automatically find the most suitable detection model, with proper parameters, for the incoming data. The model selection layer is extensible, as it can be updated without too much effort when a new detector becomes available to the service. Finally, we incorporate a customized tuning algorithm to flexibly filter anomalies to meet customers’ criteria. Experiments on real-world datasets show the effectiveness of our solution.&lt;BR /&gt;As shown in the pipeline below, the incoming series is first processed by a set of transformations and feature extractors. Then, in the automated model selection phase, the Model Selector takes the extracted features as input and outputs the anomaly detection model that best fits the input data. Each anomaly detection model is associated with a Parameter Estimator, which is used to compute related parameters. Next, our service uses the selected model and its corresponding parameters to detect anomalies in the input data and obtains a preliminary anomaly detection result. Lastly, tuned parameters are applied to obtain a customized anomaly detection result.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture6.png" style="width: 558px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218348iAA37A847ADD140FA/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture6.png" alt="Picture6.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;Model customization through user feedback&lt;/H3&gt;
&lt;P&gt;Tuning is one way to customize the model to users’ business and dataset. We are also leveraging user feedback as human knowledge to adapt the model/parameters to better serve and fit customers’ data and business.&lt;/P&gt;
&lt;P&gt;The motivation is that different customers have different service patterns and anomaly definitions, and it is complicated for customers to tune the model directly to fit their scenarios. To solve this problem, we provide a feedback mechanism in Metrics Advisor. Customers can obtain more accurate detection results by providing confirmation on the results; this feedback is fed into an end-to-end framework based on Reinforcement Learning (RL) that learns from it.&lt;/P&gt;
&lt;H3&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Pic8.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218685i4CAA93D0F7224123/image-size/large?v=v2&amp;amp;px=999" role="button" title="Pic8.png" alt="Pic8.png" /&gt;&lt;/span&gt;&lt;/H3&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Adaptive root cause analysis&lt;/H3&gt;
&lt;P&gt;Root cause analysis consists of two major processes,&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Automation part&lt;/STRONG&gt;: based on the nature of the metrics, such as their hierarchy and distribution, machine learning is used to generate an analysis report that identifies the most likely root causes of an incident.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Online learning part&lt;/STRONG&gt;: from the data topology and the incident owner's feedback and interactions, the severity of an incident is inferred to make sure alerts are actionable and to reduce ignorable information in analysis reports.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The learning part feeds strategies back into the automation part to improve the quality of the analysis reports and alerts.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture7.png" style="width: 480px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/218350iC98FAF59F5EF0407/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture7.png" alt="Picture7.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;While a number of automation technologies are available for root cause analysis, Metrics Advisor also takes customization into account. Because Metrics Advisor is a general cloud service while customers’ scenarios are diverse, similar incidents may mean different things for different customers, or for the same service at different stages. The ability to learn from customer feedback and to be implicitly tuned to be more actionable is the biggest advantage of Metrics Advisor. To implement this, Metrics Advisor creatively combines online learning technologies and incident representation to achieve a self-evolving root cause analysis.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Get Started&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Learn more with our &lt;A href="https://go.microsoft.com/fwlink/?linkid=2141501" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://go.microsoft.com/fwlink/?linkid=2142156" target="_self"&gt;Create a free Metrics Advisor resource&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://go.microsoft.com/fwlink/?linkid=2141630" target="_self"&gt;Onboard your first dataset with our QuickStarts&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://dlp-kdd.github.io/assets/pdf/a17-ying.pdf" target="_self"&gt;Automated Model Selection for Time-Series Anomaly Detection&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798" target="_self"&gt;Overview of SR-CNN&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162" target="_self"&gt;Introducing Anomaly Detector&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 22 Sep 2020 14:40:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-metrics-advisor-a-new-cognitive-service/ba-p/1668025</guid>
      <dc:creator>Tony_Xing</dc:creator>
      <dc:date>2020-09-22T14:40:56Z</dc:date>
    </item>
    <item>
      <title>Ignite 2020 Neural TTS updates: new language support, more voices and flexible deployment options</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544</link>
      <description>&lt;H1&gt;Ignite 2020 Neural Text-to-Speech updates: new language support, more voices and flexible deployment options&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post was co-authored by Garfield He, Melinda Ma, Yueying Liu and Yinhe Wei&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&amp;nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;close to human-parity&lt;/A&gt;. &amp;nbsp;Since its launch, we have seen it widely adopted in a variety of scenarios by many Azure customers, from voice assistants to audio content creation. We continue to push the envelope to enable more developers to add natural-sounding voices to their applications and solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are happy to announce a series of updates to Neural TTS that extends its reach globally and allows developers to deploy it anywhere the data resides. This includes newly available languages, new voices with rich personas, and on-premises deployment through Docker containers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;18 new &lt;EM&gt;languages/locales&lt;/EM&gt; supported &lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS has now been extended to support 18 new &lt;EM&gt;languages/locales.&lt;/EM&gt;&amp;nbsp;They are&amp;nbsp;Bulgarian, Czech, German (Austria),&amp;nbsp; German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can hear samples of these voices below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="599px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;bg-BG&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Bulgarian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Kalina&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Архитектурното културно наследство в България е в опасност.&amp;nbsp;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release Blog Samples/bg-BG Kalina.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;cs-CZ&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Czech&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Vlasta&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Policisté většinou chodí v uniformě a jsou označeni hodnostmi.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/cs-CZ%20Vlasta.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;de-AT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;German (Austria)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Ingrid&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Ab Herbst werden Lehrer, die sich dafür interessieren, eigens ausgebildet.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/de-AT%20Ingrid.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;de-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;German (Switzerland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Leni&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Dreizehn Millionen Liter mehr als im Vorjahr.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/de-CH%20Leni.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;el-GR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Greek&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Athina&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Για να βρεις ποιος σε εξουσιάζει, απλώς σκέψου ποιος είναι αυτός που δεν επιτρέπεται να κριτικάρεις .&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/el-GR%20Athina.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;en-IE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;English &amp;nbsp;(Ireland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Emily&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Now we have seventy members and two dragon boats.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/en-IE%20Emily.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;fr-CH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;French (&lt;SPAN&gt;Switzerland&lt;/SPAN&gt;)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Ariane&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Chaque équipe jouera donc 5 matchs de 20 minutes dans sa poule.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/fr-CH%20Ariane.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;he-IL&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Hebrew (Israel)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Hila&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;הכל פתוח במאבק על המקום האחרון לפלייאוף העליון של ליגת העל בכדורגל.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/he-IL%20Hila.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;hr-HR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Croatian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Gabrijela&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Idemo na pobjedu u Maksimiru, pred našem publikom dat ćemo sto posto.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/hr-HR%20Gabrijela.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;hu-HU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Hungarian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Noemi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;A macska felmászott a tetőre és leugrott.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/hu-HU%20Noemi.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;id-ID&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Indonesian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Ardi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Inflasi dapat digolongkan menjadi empat golongan, yaitu inflasi ringan, sedang, berat, dan hiperinflasi.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/id-ID%20Ardi.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;ms-MY&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Malay&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Yasmin&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Beg berkenaan dibawa ke hospital untuk menjalankan proses pengenalan.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/ms-MY%20Yasemin.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;ro-RO&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Romanian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Alina&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Temperaturile maxime se vor încadra între 15 şi 23 de grade Celsius.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/ro-RO%20Alina.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;sk-SK&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Slovak&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Viktoria&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Kúzelné miesta nájdete aj za jej hranicami, v malebnej prírode.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/sk-SK%20Viktoria.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;sl-SI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Slovenian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Petra&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Predlagani zakon vključuje tudi načrt nadaljnjega ukrepanja.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/sl-SI%20Petra.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;ta-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Tamil&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Pallavi&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;உச்சிமீது வானிடிந்து வீழுகின்ற போதினும், அச்சமில்லை அச்சமில்லை அச்சமென்பதில்லையே&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/ta-IN%20Pallavi.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;te-IN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Telugu&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;Shruti&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;అందం ముఖంలో ఉండదు. సహాయం చేసే మనసులో ఉంటుంది&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/te-IN%20Shruti.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="70.5px"&gt;
&lt;P&gt;vi-VN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109px"&gt;
&lt;P&gt;Vietnamese&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="79px"&gt;
&lt;P&gt;HoaiMy&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="269.5px"&gt;
&lt;P&gt;Hà Nội là thủ đô của Việt Nam.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/vi-VN%20HoaiMy.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these new voices, Microsoft Azure Neural TTS supports &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;49 languages/locales&lt;/A&gt; in total.&lt;/P&gt;
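&lt;P&gt;As a minimal sketch of how a developer might select one of these new voices with the Speech SDK for JavaScript: the snippet below assumes the full voice names follow the locale-VoiceNameNeural pattern listed on the language support page (e.g. bg-BG-KalinaNeural) and uses placeholder key and region values.&lt;/P&gt;
&lt;LI-CODE lang="javascript"&gt;// Minimal sketch: synthesize speech with one of the newly released neural voices using the
// Speech SDK for JavaScript. The voice name and the key/region placeholders are assumptions;
// check the language support page for the exact voice names available to your resource.
var speechConfig = SpeechSDK.SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
speechConfig.speechSynthesisVoiceName = "bg-BG-KalinaNeural";

var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig);
synthesizer.speakTextAsync(
    "Архитектурното културно наследство в България е в опасност.",
    function (result) {
        // result.audioData contains the synthesized audio when synthesis succeeds.
        synthesizer.close();
    },
    function (error) {
        window.console.log(error);
        synthesizer.close();
    });&lt;/LI-CODE&gt;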
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;14 additional &lt;EM&gt;voices&lt;/EM&gt; released to enrich the variety&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Customers use TTS for different scenarios and their requirements for voice personas can vary. To provide more options to developers, we continue to create more&amp;nbsp;&lt;EM&gt;voices&lt;/EM&gt; in each language. Besides the extension to support new locales, we’ve announced 14 new voices to enrich the variety in the existing languages.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear samples of these voices below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="624px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;&lt;STRONG&gt;Language &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;&lt;STRONG&gt;Gender&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;de-DE&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;German&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Conrad&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Je würziger das Fleisch, desto würziger und kräftiger sollte auch der Wein sein.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/de-DE%20Conrad.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;en-AU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;English (Australia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;William&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;They have told me nothing, and probably cannot tell me anything to the purpose.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/en-AU%20William.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;en-GB&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;English &amp;nbsp;(UK)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Ryan&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Today’s temperature was a record 26.5 degrees Celsius.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/en-GB%20Ryan.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;en-US&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;English (US)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Jenny&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;For example, we place a session cookie on your computer each time you visit our Website.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/en-US%20Jenny.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;es-ES&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Spanish (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Alvaro&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Dos helicópteros medicalizados tuvieron que acudir al lugar a rescatar a los heridos.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/es-ES%20Alvaro.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;es-MX&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Spanish (Mexico)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Jorge&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;El niño mencionó que si pudiera caminar, pediría un balón para poder patearlo o una cuerda para poder saltar.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/es-MX%20Jorge.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;fr-CA&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;French (Canada)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Jean&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Ce jour tant attendu arrive enfin!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/fr-CA%20Jean.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;fr-FR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;French (France)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Henri&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Jusqu'ici, nous vous avons toujours fait confiance et accordé le bénefice du doute.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/fr-FR%20Henri.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;it-IT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Italian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Isabella&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;I gel igienizzanti sono aumentati di prezzo.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/it-IT%20Isabella.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;it-IT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Italian&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Diego&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;Domani preparerò dei biscotti con le gocce di cioccolato.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/it-IT%20Diego.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;ja-JP&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Japanese&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Keita&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;キャッシュレス決済を利用して、支払いを簡単にする。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/ja-JP%20Keita.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;ko-KR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Korean&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;InJoon&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;규모가 더욱 확대되었다.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/ko-KR%20Injoon.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;pt-BR&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Portuguese (Brazil)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Male&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Antonio&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;O que você quer ganhar de presente de natal?&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/pt-BR%20Antonio.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="68px"&gt;
&lt;P&gt;th-TH&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="105px"&gt;
&lt;P&gt;Thai&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="76px"&gt;
&lt;P&gt;Female&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110px"&gt;
&lt;P&gt;Premwadee&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="265px"&gt;
&lt;P&gt;วิกฤตแบบนี้บริษัทยิ่งต้องการคนที่พร้อมเผชิญปัญหา&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/garhe/Release%20Blog%20Samples/th-TH%20Premwasdee.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, Microsoft Azure Text-to-Speech service offers &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;68 neural voices&lt;/A&gt;.&amp;nbsp; Hear all these neural voices saying 'Thank you' in 49 languages/locales in the video below.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/p4sknif1zJc" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/p4sknif1zJc/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Across standard and neural TTS capabilities, we now offer 140+ voices in total. Check the 70+ &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;standard voices&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;More than 15 speaking styles available in en-US and zh-CN voices&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we’re building upon our Neural TTS capabilities in English (US) and Chinese (zh-CN) with new voice styles. By default, the Text-to-Speech service synthesizes text using a neutral speaking style. With neural voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for scenarios that fit your needs, like customer service, newscasts, and voice assistants.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the new English (US) voice, Jenny, created with a friendly, warm, and comforting voice persona focused on conversational scenarios, we provide additional speaking styles including chat, customer service, and assistant.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can hear the different speaking styles in Jenny’s voice below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="623"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Style&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;&lt;STRONG&gt;Style description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="270"&gt;
&lt;P&gt;&lt;STRONG&gt;Sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;General&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;Expresses a neutral tone and available for general use&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="270"&gt;
&lt;P&gt;Valentino Lazaro scored a late winner for Austria to deny Northern Ireland a first Nations League point.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/Jenny%20General.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Chat&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;Expresses a casual and relaxed tone in conversation&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="270"&gt;
&lt;P&gt;Oh, well, that's quite a change from California to Utah.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/Jennychat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Customer service&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;Expresses a friendly and helpful tone for customer support&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="270"&gt;
&lt;P&gt;Okay, great.&amp;nbsp; In the meantime, see if you can reach out to Verizon and let them know your issue. And Randy should be calling you back shortly.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/Jenny%20CustomerService.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;Assistant&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="198"&gt;
&lt;P&gt;Expresses a warm and relaxed tone for digital assistants&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="270"&gt;
&lt;P&gt;United States spans 2 time zones. In Nashville, it's 9:45 PM.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/Jenny%20Assistant.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
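&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These styles are requested through SSML with the mstts:express-as element, the same pattern shown later in this post for Xiaoxiao. As a minimal sketch, assuming the same YOUR_REGION and YOUR_SPEECH_KEY placeholders as in the earlier sample and that the style identifiers match the style names in the table above (for example, "chat"), the request below asks Jenny for her chat style:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;
&lt;P&gt;# Ask the en-US Jenny voice to speak in the chat style; see the SSML docs linked later in this post for the full list of style identifiers.&lt;/P&gt;
&lt;P&gt;curl -s -X POST "https://YOUR_REGION.tts.speech.microsoft.com/cognitiveservices/v1" \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY" \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H "Content-Type: application/ssml+xml" \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H "User-Agent: tts-sample" \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-d '&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"&amp;gt;&amp;lt;voice name="en-US-JennyNeural"&amp;gt;&amp;lt;mstts:express-as style="chat"&amp;gt;Oh, well, that is quite a change from California to Utah.&amp;lt;/mstts:express-as&amp;gt;&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;' &amp;gt; jenny-chat.wav&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;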
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A new speaking style is also available for the en-US male voice, Guy. Guy’s newscast style is a great choice when you need a male voice to read professional and news-related content.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;In addition, 10 new speaking styles are available with our zh-CN voice, Xiaoxiao. These new styles are optimized for audio content creators and intelligent bot developers to create more engaging interactive audios that express rich emotions.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can hear the new speaking styles in Xiaoxiao’s voice below:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="933px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Calm&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Affectionate&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Angry&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;那，那我再问你，你之前有养过宠物嘛？&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/calm.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;老公，把灯打开好吗，好黑呀，我很怕。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/affectionate.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;没想到，我们八年的感情真的完了！&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/angry.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Disgruntled&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Fearful&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Gentle&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;这你都不明白吗？真是个榆木脑袋。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/disgruntled.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;先生，你没事吧？要不要我叫医生过来？&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/fearful.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;我今天运气特别好,如果没有遇到您,还不知道会怎么样呢！&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/gentle.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Cheerful&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Serious&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="311px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Sad&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;太好了，恭喜你顺利通过考核。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/cheerful.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;不要恋战，等待时机，随时准备突围。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/serious.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;没想到，你居然是这么一个无情无义的的人！&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/sad.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the Chinese voice Xiaoxiao, the intensity (‘style degree’) of the speaking style can be further adjusted to better fit your use case. You can specify a stronger or softer style degree to make the speech more expressive or more subdued.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="622px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="622px"&gt;
&lt;P&gt;没想到，你居然是这么一个无情无义的的人！&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;Sad=0.5&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/Sad%200.5.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;Sad=1.0&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/Sad%201.0.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;Sad=1.5&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/Sad%201.5.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="311px"&gt;
&lt;P&gt;Sad=2.0&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/ignite%20blog/zhCN/Sad%202.0.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The style degree can be adjusted from 0.01 to 2 inclusive. The default value is 1, which applies the predefined style intensity. The minimum value, 0.01, softens the style toward a flatter tone, while the maximum value, 2, makes the style intensity noticeably stronger than the default.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SSML snippet below illustrates how the 'styledegree' attribute is used to change the intensity of a speaking style.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;
&lt;P&gt;&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN"&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;voice name="zh-CN-XiaoxiaoNeural"&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;mstts:express-as style="sad" styledegree="2"&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 快走吧，路上一定要注意安全，早去早回。&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;/mstts:express-as&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;/voice&amp;gt;&lt;/P&gt;
&lt;P&gt;&amp;lt;/speak&amp;gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The 'style degree' feature currently applies only to the Chinese voice Xiaoxiao and will be extended to more languages and voices soon.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Check &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles" target="_blank" rel="noopener"&gt;SSML&lt;/A&gt; for the details on how to use these speaking styles, together with other rich voice tuning capabilities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Neural TTS Container is in public preview with 16 voices available in 14 languages&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have launched Neural TTS Container in public preview, as we are seeing a clear trend towards a future powered by the intelligent cloud and intelligent edge. With Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements. Their Speech apps are portable and scalable with greater consistency whether they run on the edge or in Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Currently 14 languages/locales are supported with 16 voices in Neural TTS Containers, as listed below.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="324"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice &lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;de-de&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;KatjaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-au&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;NatashaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-ca&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;ClaraNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;LibbyNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-gb&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;MiaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-us&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;AriaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;en-us&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;GuyNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;es-es&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;ElviraNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;es-mx&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;DaliaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;fr-ca&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;SylvieNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;fr-fr&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;DeniseNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;it-it&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;ElsaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;ja-jp&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;NanamiNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;ko-kr&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;SunHiNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;pt-br&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;FranciscaNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="167px" height="30px"&gt;
&lt;P&gt;zh-cn&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" height="30px"&gt;
&lt;P&gt;XiaoxiaoNeural&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To get started, fill out and submit the&amp;nbsp;&lt;A href="https://aka.ms/cognitivegate" target="_blank" rel="noopener"&gt;request form&lt;/A&gt;&amp;nbsp;to request access to the container. Currently, Neural TTS containers are gated and approved only for qualified customers, primarily enterprises (EA customers) and Microsoft partners.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Cognitive Services containers, including Neural TTS containers, aren't licensed to run without being connected to the metering/billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft. Queries to the container are billed at the pricing tier of the Azure resource that's used for the&amp;nbsp;ApiKey.&lt;/P&gt;
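&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of what that billing configuration looks like, the command below shows how the billing endpoint and key of your Speech resource are typically passed when the container starts. The image path, tag, and the GPU and memory flags you need are listed in the docker pull and docker run links in the steps below; confirm them there before running.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="1" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;
&lt;P&gt;# Sketch only: start the neural TTS container and point it at your Speech resource for metering/billing.&lt;/P&gt;
&lt;P&gt;# Add the resource and GPU options called for by the hardware requirements linked in step 1 below.&lt;/P&gt;
&lt;P&gt;docker run --rm -it -p 5000:5000 \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;Eula=accept \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;Billing=YOUR_SPEECH_RESOURCE_ENDPOINT_URI \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;ApiKey=YOUR_SPEECH_KEY&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;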
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Here are the steps to install and run the container:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Make sure your machine to host the container meets the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format#container-requirements-and-recommendations" target="_blank" rel="noopener"&gt;hardware requirements&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Get the container image with &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format#get-the-container-image-with-docker-pull" target="_blank" rel="noopener"&gt;docker pull&lt;/A&gt;. For all the supported locales and corresponding voices of the &lt;STRONG&gt;neural text-to-speech&lt;/STRONG&gt; container, please see &lt;A href="https://docs.microsoft.com/azure/cognitive-services/containers/container-image-tags#neural-text-to-speech" target="_blank" rel="noopener"&gt;Neural Text-to-speech image tags&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Run the container with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format#run-the-container-with-docker-run" target="_blank" rel="noopener"&gt;docker run.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-howto?tabs=ntts%2Ccsharp%2Csimple-format#validate-that-a-container-is-running" target="_blank" rel="noopener"&gt;Validate&lt;/A&gt; that the container is running.&lt;/LI&gt;
&lt;LI&gt;Query the container’s endpoint. Taking the AriaNeural voice as an example, you can run the HTTP POST request below to get the TTS audio output:&lt;/LI&gt;
&lt;/OL&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="623"&gt;
&lt;P&gt;curl -s -v -X POST http://localhost:5000/speech/synthesize/cognitiveservices/v1 \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H 'Accept: audio/*' \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H 'Content-Type: application/ssml+xml' \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-H 'X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm' \&lt;/P&gt;
&lt;P&gt;&amp;nbsp;-d '&amp;lt;speak version="1.0" xml:lang="en-US"&amp;gt;&amp;lt;voice name="en-US-AriaNeural"&amp;gt;This is a test,&amp;nbsp;only a test.&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;' &amp;gt; output.wav&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Learn more about &lt;A href="https://aka.ms/cscontainers" target="_blank" rel="noopener"&gt;Container support in Cognitive Services&lt;/A&gt; and visit the &lt;A href="https://aka.ms/cscontainers-faq" target="_blank" rel="noopener"&gt;Frequently Asked Questions&lt;/A&gt; on Azure Cognitive Services Containers.&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers globally with flexible deployment options. For more information, see the resources below.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener noopener noreferrer"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener noopener noreferrer"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our&amp;nbsp;&lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener noreferrer"&gt;sample code&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Learn about &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/index-hosting" target="_blank" rel="noopener"&gt;Speech containers&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Sep 2020 10:54:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-09-25T10:54:41Z</dc:date>
    </item>
    <item>
      <title>Ignite 2020 - Conversational AI updates</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-conversational-ai-updates/ba-p/1691841</link>
      <description>&lt;P&gt;In the 6 months since Microsoft Build 2020, where &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/build-2020-conversational-ai-updates/ba-p/1397685" target="_self"&gt;we made exciting steps forward&lt;/A&gt;, such as the GA availability of Bot Framework Composer and the &lt;A href="https://aka.ms/virtualassistant" target="_self"&gt;Virtual Assistant Solution Accelerator&lt;/A&gt;, we have continued to drive the Conversational AI platform forward - improving the developer experience and meeting the needs of our enterprise customers. Azure Bot Service now handles 2.5 billion messages per month, double the rate announced at Build, with over 525,000 registered developers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our updates for Ignite 2020 include a new release of Bot Framework Composer; a public preview of Orchestrator, which provides language understanding arbitration and decision making optimized for conversational AI applications; and version 4.10 of the Bot Framework SDK.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;New release of Bot Framework Composer&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Bot Framework Composer v.1.1.1, released earlier this month, has added a number of significant features to the application, including creation and management capabilities for QnA Maker knowledge bases. Now, as with the existing integration for LUIS apps, QnA pairs can be added / edited from within Composer, improving overall productivity by removing the need to use a separate portal for these tasks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="composer-qna.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/220163iDE57518664595593/image-size/large?v=v2&amp;amp;px=999" role="button" title="composer-qna.PNG" alt="QnA Maker integration in Composer" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;QnA Maker integration in Composer&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The ability to build bots that target multiple languages has been added, with a user able to produce appropriate LU (language understanding) and LG (language generation) assets in seconds to target one or more alternative locales.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Other enhancements in this release include automatic generation of manifests when developing Bot Framework Skills, the addition of Intellisense for text editing and preview support for a JavaScript bot runtime.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We also continue to deepen our integration with other key partners within Microsoft and starting this fall, users of Power Virtual Agents will be able to create custom dialogs and directly add them to Power Virtual Agents bots. These dialogs can be saved, hosted, and executed together with Power Virtual Agents bot content, providing a simpler way to extend bot capabilities with custom code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The next release of Composer, later this year, will feature further QnA Maker integration, improvements to the authoring canvas and the ability to easily re-use assets built with Composer between projects.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additionally, we are updating the list of &lt;A href="https://microsoft.github.io/botframework-solutions/overview/skills/" target="_self"&gt;pre-built skills&lt;/A&gt;, that we released as part of Virtual Assistant Solution Accelerator 1.0, to be based on Bot Framework Composer and adaptive dialogs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started with Composer today at&amp;nbsp;&lt;A href="http://aka.ms/bfcomposer" target="_blank" rel="noopener"&gt;http://aka.ms/bfcomposer&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Orchestrator public preview!&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Conversational AI applications today are built using multiple technologies to fulfil various language understanding needs, such as LUIS and QnA Maker, as well as often being composed of multiple skills, with each fulfilling a specific conversation topic. Orchestrator answers a critical need for language understanding arbitration and decision making, to route incoming user requests to an appropriate skill or to dispatch them to a specific sub-component within a bot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Orchestrator is a transformer-based solution, which is heavily optimized for conversational AI applications and runs locally within a bot. You can find more details and try the Orchestrator public preview by visiting &lt;A href="https://aka.ms/bf-orchestrator" target="_blank" rel="noopener"&gt;https://aka.ms/bf-orchestrator&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Later this year, we plan to introduce a preview of Orchestrator support within Composer and the Virtual Assistant Solution Accelerator.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework SDK 4.10 released&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Version &lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/what-is-new?view=azure-bot-service-4.0" target="_self"&gt;4.10 of the Bot Framework SDK is now available&lt;/A&gt;, adding several new supporting features for our key partners. These include enhanced support for building skills for Power Virtual Agents, along with adaptive dialog and additional lifecycle event support for Microsoft Teams.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A core focus of this release was on quality across the entire stack, covering key pillars of &lt;A href="http://aka.ms/botframeworkdocs" target="_self"&gt;documentation&lt;/A&gt;, customer supportability, customer feature requests, code quality and improvements to our internal team agility. Almost 600 GitHub issues were resolved as part of the release, across our SDK languages (C#, JavaScript, Python, Java) and our tools, including accessibility improvements for WebChat.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See the &lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/what-is-new?view=azure-bot-service-4.0" target="_self"&gt;August 2020 release notes&lt;/A&gt; for more details on v4.10 of the SDK.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Bot Service&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to announce that the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-alexa?view=azure-bot-service-4.0" target="_self"&gt;Alexa channel&lt;/A&gt; for&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/bot-service/" target="_self"&gt;Azure Bot Service (ABS)&lt;/A&gt;, which entered public preview at Build 2020, is now generally available!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In response to feedback from customers, ABS&amp;nbsp;now has &lt;A href="https://aka.ms/bfwhatsapp" target="_self"&gt;support for WhatsApp&lt;/A&gt;, allowing you to surface your bot on the popular chat app, alongside the existing channels available via ABS. Built in partnership with &lt;A href="https://www.infobip.com" target="_self"&gt;InfoBip&lt;/A&gt;, the WhatsApp adapter can be added to your bot within minutes. Get started with WhatsApp integration for Bot Framework at&amp;nbsp;&lt;A href="https://aka.ms/bfwhatsapp" target="_blank" rel="noopener"&gt;https://aka.ms/bfwhatsapp&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of our commitment to customer privacy and security, ABS has introduced support for Azure Lockbox. Lockbox enables approval flows and audits when support engineers require access to customer data and, additionally, we will soon add customer managed encryption keys.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Bot Service now has expanded channel support within the Azure US Government region and, looking ahead, we will be adding a preview of Adaptive Cards 2.0 and SSO (single sign-on) support for the Teams and WebChat channels.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Ignite 2020 sessions&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://myignite.microsoft.com/sessions/58ed4deb-6d2c-4ad9-8e9b-622f53b7d049" target="_self"&gt;Conversational AI Customer and Employee Virtual Assistants&lt;/A&gt;&lt;BR /&gt;Darren Jefford, Group Program Manager, Conversational AI&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://myignite.microsoft.com/sessions/f9896223-587d-43f6-8ae7-b8d666951875" target="_self"&gt;Building Bots with Power Virtual Agents and extending them with Microsoft Bot Framework&lt;/A&gt;&lt;BR /&gt;Marina Kolomiets - Senior Program Manager, Power Virtual Agents&lt;/P&gt;</description>
      <pubDate>Tue, 22 Sep 2020 15:26:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-conversational-ai-updates/ba-p/1691841</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-09-22T15:26:11Z</dc:date>
    </item>
    <item>
      <title>Power your VS Code Notebooks with AzML compute instances!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630</link>
      <description>&lt;DIV&gt;&lt;SPAN&gt;Hey&amp;nbsp;AzML&amp;nbsp;community!&amp;nbsp;The&amp;nbsp;VS&amp;nbsp;Code&amp;nbsp;team&amp;nbsp;is&amp;nbsp;excited&amp;nbsp;to&amp;nbsp;announce&amp;nbsp;version&amp;nbsp;0.6.14&amp;nbsp;of&amp;nbsp;the&amp;nbsp;AzML&amp;nbsp;extension,&amp;nbsp;with&amp;nbsp;added&amp;nbsp;support&amp;nbsp;for&amp;nbsp;Dataset&amp;nbsp;creation&amp;nbsp;from&amp;nbsp;existing&amp;nbsp;Datastores&amp;nbsp;and&amp;nbsp;an&amp;nbsp;awesome&amp;nbsp;new&amp;nbsp;feature&amp;nbsp;that&amp;nbsp;enables&amp;nbsp;you&amp;nbsp;to&amp;nbsp;power-up&amp;nbsp;your&amp;nbsp;local&amp;nbsp;Jupyter&amp;nbsp;Notebooks.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you'd like to follow along with the blog post and try out the new features, you can install the extension &lt;A title="AzML extension install link." href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;Powering your VS Code Notebooks with an AzML Compute instance&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Many of you have praised the Jupyter Notebook integration in VS Code as it's become an integral part of your data-science workflow. You've also told us that sometimes you want to run your Notebooks against more powerful machines without having to deal with SSH and connecting to remote servers.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;The AzML extension now provides a&amp;nbsp;&lt;STRONG&gt;highly streamlined&lt;/STRONG&gt; way of connecting your local Jupyter Notebooks to a compute instance. We're authenticating using your Azure credentials and eliminating previous manual steps for connecting to the remote server.&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;&lt;SPAN&gt;It's &lt;STRONG&gt;extremely&lt;/STRONG&gt; &lt;/SPAN&gt;&lt;/SPAN&gt;easy to get started - you can simply click on the "Jupyter server: " button in the Notebook toolbar or invoke the "Azure ML: Connect to compute instance Jupyter server" command.&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="remote_server_connect.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216142iCD97B32FA2DDEA9B/image-size/large?v=v2&amp;amp;px=999" role="button" title="remote_server_connect.gif" alt="Invoke remote Jupyter server connection" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Invoke remote Jupyter server connection&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;In case you're unable to see the "Azure ML: Compute Instances" list option, it's likely that you don't have the &lt;A title="Link to AzML extension installation page" href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;AzML extension installed&lt;/A&gt;. It might take a little bit of time for the quick-pick options to show up as you may be activating the AzML and Python extensions for the first time.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Following the extension's guided prompts, you can select from a list of compute instances in your workspace. You can also create a new instance by simply providing a name and a VM size.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;As a friendly reminder, if you don't have any workspaces you can create one via the "Azure ML: Create Workspace" command.&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci_select_create.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216143iDFE8E6A9BADD323D/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci_select_create.gif" alt="Select or create a new compute instance" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Select or create a new compute instance&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Once you've selected a compute instance, you will be prompted to reload your VS Code window. After reloading the window and reopening your Notebook, you must &lt;STRONG&gt;run a cell&lt;/STRONG&gt; to initiate the compute instance connection.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci_connect.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216144i0E6EF4465AA69451/image-size/large?v=v2&amp;amp;px=999" role="button" title="ci_connect.gif" alt="Run a cell to connect to the compute instance" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Run a cell to connect to the compute instance&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Voila! Your local Jupyter Notebook is now running against your AzML compute instance. You gain all the benefits of using Notebooks in VS Code, coupled with the benefits of running against a more powerful remote compute.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;For more detailed step-by-step instructions you can follow our &lt;A title="Remote Connection Documentation Link" href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-vs-code-remote" target="_blank" rel="noopener"&gt;docs&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Creating a Dataset from an existing Datastore&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;With our 0.6.12 (May) release, the AzML extension added support for creating datasets directly within VS Code. Up until now you could only create a dataset using local files or a web URL. The extension now allows you to use an existing datastore to create a dataset. Following the guided prompts, you can choose from a list of registered datastores and then provide the absolute path to your data.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="datastore_dataset_creation.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/216145iF2CD81A77815718B/image-size/large?v=v2&amp;amp;px=999" role="button" title="datastore_dataset_creation.gif" alt="Create a dataset from an existing datastore" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Create a dataset from an existing datastore&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;&lt;FONT size="4"&gt;Feedback&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;The new Notebook and compute instance integration is still in its preliminary phase and we're actively working on supporting a broader set of scenarios:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Viewing and interacting with your remote server's filesystem.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Mounting a dataset onto the compute instance.&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Supporting remote Jupyter server UI changes in the &lt;A title="Native notebook editor blog post link" href="https://devblogs.microsoft.com/python/notebooks-are-getting-revamped/" target="_blank" rel="noopener"&gt;new native notebook editor&lt;/A&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If there's anything that you would like us to prioritize, please feel free to let us know on &lt;A title="Azure ML extension github issues page." href="https://github.com/microsoft/vscode-tools-for-ai/issues/new" target="_blank" rel="noopener"&gt;Github&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If you would like to provide feedback on the overall extension, please feel free to do so via our &lt;A title="AzML extension survey link" href="http://aka.ms/aml-ext-survey" target="_blank" rel="noopener"&gt;survey&lt;/A&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 03 Sep 2020 22:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630</guid>
      <dc:creator>Sid_Unnithan</dc:creator>
      <dc:date>2020-09-03T22:30:00Z</dc:date>
    </item>
    <item>
      <title>Accelerating AI for COVID-19 on Microsoft Azure Machine Learning using Clara Imaging from NVIDIA NGC</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerating-ai-for-covid-19-on-microsoft-azure-machine-learning/ba-p/1536595</link>
      <description>&lt;P aria-level="1"&gt;&lt;EM&gt;This post was co-authored by&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt; Krishna Anumalasetty, Tom Drabas, Nalini Chandhi &lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;from Microsoft &lt;/EM&gt;&lt;EM&gt;and&amp;nbsp;&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt;Abhilash Somasamudramath, Manuel Reyes Gomez, Brad Genereaux, Akhil Docca &lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;from NVIDIA&lt;/EM&gt;&lt;/P&gt;
&lt;P aria-level="1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="1"&gt;&lt;EM&gt;The following three videos will walk you through the major steps outlined in this blog and will help you in implementing the solution.&lt;/EM&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-level="1"&gt;&lt;EM&gt;&lt;A title="Azure Machine Learning Workspace Setup" href="https://channel9.msdn.com/Shows/Docs-AI/Part-1-Accelerating-AI-for-COVID-19-Setting-up-an-Azure-ML-workspace" target="_blank" rel="noopener"&gt;Setting up Azure Machine Learning Workspace&lt;/A&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI aria-level="1"&gt;&lt;A title="Installing AzureML-NGC tool kit that prepares the environment including downloading the Clara libraries etc." href="https://channel9.msdn.com/Shows/Docs-AI/Part-2-Accelerating-AI-for-COVID-19-Using-the-AzureML-NGC-Toolkit" target="_blank" rel="noopener"&gt;&lt;EM&gt;Installing the AzureML-NGC Toolkit&lt;/EM&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI aria-level="1"&gt;&lt;A href="https://channel9.msdn.com/Shows/Docs-AI/Part-3-Accelerating-AI-for-COVID-19-Fine-tuning-the-pretrained-COVID-19-Image-Classification-Model" target="_blank" rel="noopener"&gt;&lt;EM&gt;Finetuning the pre-trained model&lt;/EM&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P aria-level="1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Shows annotated images classified for likelihood of COVID-19 by the fine-tuned Clara COVID-19 CT-Scan Classifier model" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215876iB933DD7D41BAFD54/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_0-1598982910472.png" alt="Krishna_Anumalasetty_0-1598982910472.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 1. CT-Scan Images Classified for COVID-19 Probability&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;COVID-19 has fundamentally transformed the world we live in. As the scientific community across the globe unites in the face of this pandemic, it is crucial to enable researchers to collaborate and leverage tools that will speed up detection and drug discovery for COVID-19. The power of AI in radiological medical imaging is helping with faster detection, segmentation, and notifications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Leveraging AI for healthcare applications is challenging for many reasons. The complicated science required to build and train deep learning neural networks, along with the setup and maintenance of the supporting infrastructure required to develop, deploy, and manage these applications at scale, poses barriers to addressing our most pressing healthcare challenges with AI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cloud computing has enabled researchers with easy access to scalable, on-demand infrastructure to get their AI applications up and running quickly. In the medical imaging space, the real need is for a platform that combines the power of NVIDIA GPUs with a secure environment that also allows easy access to AI software. &lt;A href="https://developer.nvidia.com/clara?ncid=progr-22696#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NVIDIA Clara™&lt;/A&gt; is a full-stack GPU-accelerated healthcare framework accelerating the use of AI for medical research and is available on the &lt;A href="https://www.nvidia.com/en-us/gpu-cloud/?ncid=progr-55518#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NVIDIA NGC&lt;/A&gt; Catalog.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A target="_blank" name="_Toc46160340"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc48730727"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc49186611"&gt;&lt;/A&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Azure Machine Learning&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;A title="Azure Machine Learning" href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_self"&gt;Azure Machine Learning&lt;/A&gt; (Azure ML) empowers developers, data scientists, machine learning engineers, and AI engineers to build, train, deploy, and manage machine learning models. It is an open platform with built-in support for open-source tools and frameworks, such PyTorch, SciKit Learn and, TensorFlow along with numerous Integrated Developer Environments (IDEs), supporting key &amp;nbsp;languages like Python and R.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure ML abstracts the setup, installation, and configuration of the machine learning environment, saving you the hassle of infrastructure management by taking care of the underlying technicalities. This enables domain experts, such as healthcare researchers and developers, to build mission-critical AI solutions faster and more easily. Whether the project requires image classification, object detection, speech analysis, or natural language processing, Azure ML streamlines AI-powered solution development.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Productivity is also boosted by accelerating your training jobs with Azure’s global infrastructure. Enhance your workflow by scaling out to multi-node compute clusters or scaling up to powerful GPU-enabled machines. Combining this with end-to-end AI lifecycle management through industry-leading MLOps means data science teams can collaborate better and get to production quicker. And, with more than 60 compliance certifications including FedRAMP High, HIPAA and DISA IL5, plus configurable security features, Azure ML allows you to create a trusted working environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc46160341"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc48730728"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc49186612"&gt;&lt;/A&gt;NVIDIA NGC Catalog and Clara&lt;/H2&gt;
&lt;P&gt;A key component of the NVIDIA AI ecosystem is the &lt;A href="https://ngc.nvidia.com?ncid=progr-10367#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NGC&lt;/A&gt; Catalog. It is a software hub of GPU-optimized AI, HPC, and data analytics software built to simplify and accelerate end-to-end workflows. With over 150 enterprise-grade containers, 100+ models, and industry-specific SDKs that can be deployed on-premise, cloud or at the edge, the NGC Catalog enables data scientists and developers to build best-in-class solutions, gather insights, and deliver business value faster. Every single asset in the NGC Catalog is validated for performance, quality, and security by NVIDIA, providing you with the confidence needed to deploy within the Azure ML environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The deep learning containers in the NGC Catalog are updated and fine-tuned monthly to enable maximum performance through software-driven optimizations that augment GPU hardware acceleration. These performance improvements are made to libraries and runtimes to extract maximum performance from NVIDIA GPUs. Pre-trained models from the NGC Catalog help speed up the application building process. You can find more than &lt;A href="https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&amp;amp;pageNumber=1&amp;amp;query=&amp;amp;quickFilter=models&amp;amp;filters=&amp;amp;ncid=progr-25859#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;100 pre-trained models&lt;/A&gt; across a wide array of applications such as image analysis, natural language processing, speech processing, and recommendation systems. The models are curated and tuned to perform optimally on NVIDIA GPUs. By applying transfer learning, you can create custom models by retraining them against your own data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://developer.nvidia.com/clara?ncid=progr-22696#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NVIDIA Clara&lt;/A&gt; is one many artifacts available in the NGC Catalog. Clara is a healthcare framework that comprises of full-stack GPU-accelerated libraries, SDKs, and reference applications to create secure, and scalable healthcare applications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Clara family of application frameworks include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Clara Imaging:&lt;/STRONG&gt; Application frameworks for accelerating the development and deployment of AI based medical imaging workflows in radiology and pathology, as well as in some medical instruments&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Clara Parabricks:&lt;/STRONG&gt; Computational framework supporting genomic analysis in DNA and RNA&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Clara Guardian:&lt;/STRONG&gt; Application framework and partner ecosystem that simplifies the development and deployment of smart sensors within the hospital with multimodal AI&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Clara Imaging, the focus of this blog, offers easy-to-use, domain-optimized tools to create high-quality, labeled datasets, collaborative techniques to train robust AI models, and end-to-end software for scalable and modular AI deployments. It consists of two essential elements – Clara Train and Clara Deploy:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Clara Train &lt;/STRONG&gt;is a framework that includes two main libraries: AI-Assisted Annotation (AIAA), which enables medical viewers to rapidly create annotated datasets suitable for training, and a Training Framework, a TensorFlow-based framework to kick-start AI development with techniques like transfer learning, federated learning, and AutoML.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Clara Deploy&lt;/STRONG&gt; provides a container-based development and deployment framework for multi-AI, multi-domain workflows in smart hospitals for imaging, genomics, and signal processing workloads. It leverages Kubernetes to enable developers and data scientists to define a multi-staged container-based pipeline.&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Shows the entire Clara pipeline that consists of pre-trained models, AI assisted annotation, training, and deployment framework to address the end-to-end workflow" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215877i6C6BBB198AA15490/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_1-1598982910499.png" alt="Krishna_Anumalasetty_1-1598982910499.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 2. The entire Clara pipeline that includes Clara train and deploy&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The entire Clara Framework can be easily accessed from the &lt;A href="https://developer.nvidia.com/clara?ncid=progr-22696#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;Clara portfolio&lt;/A&gt; page. In addition to Clara Train and Deploy, reference models and pipelines are also available for download. Applying transfer learning, developers can create new models with their own custom dataset.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="NVIDIA NGC carries a whole host of pre-trained models for many use cases. Models cover both 2D and 3D segmentation and classification using various networks such as Res-UNet and DenseNet 121" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215878i1928E840877B8CFB/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_2-1598982910564.png" alt="Krishna_Anumalasetty_2-1598982910564.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 3. Various types of pre-trained models that cover both 2D and 3D segmentation and classification for different use cases&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;A target="_blank" name="_Toc49186613"&gt;&lt;/A&gt;NGC-AzureML Quick Launch Toolkit&lt;/H1&gt;
&lt;P&gt;A ready-to-use Jupyter notebook was created to showcase the fine-tuning of a &lt;A href="https://ngc.nvidia.com/catalog/models/nvidia:med:clara_train_covid19_3d_ct_classification?ncid=progr-19529#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;pre-trained COVID-19 CT Scan Classification model&lt;/A&gt; from the NGC Catalog.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To help automate the deployment, we have developed the &lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:amlquicklaunch_config_clara_covid19_ctscan_example?ncid=progr-95598#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NGC-AzureML Quick Launch toolkit&lt;/A&gt; that leverages the Azure ML SDK and creates the necessary compute and software resources needed to run Machine Learning applications. The Azure ML SDK uses &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;Azure Machine Learning Compute Clusters&lt;/A&gt;, which require their own quota, and can be created using the same &lt;A href="https://docs.microsoft.com/en-us/azure/azure-portal/supportability/per-vm-quota-requests#:~:text=Request%20a%20standard%20quota%20increase%20from%20Subscriptions,-To%20request%20a&amp;amp;text=and%20select%20Subscriptions.-,Select%20the%20subscription%20whose%20quota%20you%20want%20to%20increase.,%2DvCPUs)%20subscription%20limit%20increases." target="_blank" rel="noopener"&gt;mechanism&lt;/A&gt; as the process followed to setup a quota for Azure VMs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;EM&gt;Azure Machine Learning Workspace is a prerequisite and a video on setting up the Azure Machine Learning workspace can be found at&amp;nbsp;&lt;/EM&gt;&lt;EM style="font-family: inherit;"&gt;&lt;A href="https://channel9.msdn.com/Shows/Docs-AI/Part-1-Accelerating-AI-for-COVID-19-Setting-up-an-Azure-ML-workspace" target="_self"&gt;Setting up Azure Machine Learning Workspace&lt;/A&gt;. The video that walks through on how to install the AzureML-NGC toolkit described in this section can be found at&amp;nbsp;&lt;EM&gt;&lt;A title="Installing AzureML-NGC tool kit that prepares the environment including downloading the Clara libraries etc." href="https://channel9.msdn.com/Shows/Docs-AI/Part-2-Accelerating-AI-for-COVID-19-Using-the-AzureML-NGC-Toolkit" target="_blank" rel="noopener"&gt;Installing the AzureML-NGC Toolkit&lt;/A&gt;&amp;nbsp;&lt;/EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The toolkit can be used to launch any asset from the NGC Catalog by just changing a config file. In our example, the toolkit takes the relevant assets for Clara, but you can customize it to pull assets related to other use cases such as computer vision, natural language understanding, and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The toolkit automates the steps outlined below:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Configures an Azure instance with NVIDIA GPUs with the right NVIDIA libraries and drivers&lt;/LI&gt;
&lt;LI&gt;Loads the desired NGC Catalog container image (Clara Train SDK in this example) onto the Azure instance&lt;/LI&gt;
&lt;LI&gt;Uploads additional material: model(s) (the pre-trained COVID-19 CT Scan Classifier in this case), auxiliary code, and the corresponding datasets from the NGC Catalog to the Azure instance&lt;/LI&gt;
&lt;LI&gt;Loads the ready-to-use Jupyter notebook from the NGC Catalog that contains the application&lt;/LI&gt;
&lt;LI&gt;Installs JupyterLab on the Azure instance and makes it accessible locally to run the ready-to-use Jupyter notebook&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To set up the AzureML environment, you only need to run two commands in the command line interface (CLI):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A target="_blank" name="_Toc49186615"&gt;&lt;/A&gt;Install azureml-ngc-tools&lt;/H3&gt;
&lt;P&gt;First, install the NGC-AzureML Quick Launch Toolkit on the local machine, via Pip:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;pip install azureml-ngc-tools&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A target="_blank" name="_Toc49186616"&gt;&lt;/A&gt;Configure azure_config.json&lt;/H3&gt;
&lt;P&gt;This file contains the Azure credentials and the desired instance type. A ready-to-use template for this example can be downloaded &lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:amlquicklaunch_config_clara_covid19_ctscan_example?ncid=progr-95598#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;here&lt;/A&gt; and should be edited with your own credentials.&lt;/P&gt;
&lt;P&gt;An example of how the azure_config.json file might look is the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "azureml_user":
    {
        "subscription_id": "ab221ca4-f098-XXXXXXXXX-5073b3851e68",
        "resource_group": "TutorialTestA",
        "workspace_name": "TutorialTestA1",
        "telemetry_opt_out": true
    },
    "aml_compute"
    {
        "ct_name":"clara-ct",
        "exp_name":"clara-exp",
        "vm_name":"Standard_NC12s_v3",
        "admin_name": "clara",
        "min_nodes":0,
        "max_nodes":1,
        "vm_priority": "dedicated",
        "idle_seconds_before_scaledown":300,
        "python_interpreter":"/usr/bin/python",
        "conda_packages":["matplotlib","jupyterlab"],
        "environment_name":"clara_env",
        "docker_enabled":true,
        "user_managed_dependencies":true,
        "jupyter_port":9000
    }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the above file is used with the azureml-ngc-tools command, an &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;Azure Machine Learning Compute Cluster&lt;/A&gt; named “clara-ct” is created using a node from the “Standard_NC12s_v3” VM size. Follow this &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;link&lt;/A&gt; to learn more about the other specifications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Configure ngc_config.json&lt;/H3&gt;
&lt;P&gt;The ngc_config.json file lists references to all the NGC Catalog assets that are to be pre-installed in the Azure ML environment for the specific use case. In this example, the additional resources are the ready-to-use Jupyter notebook and the associated data files. A ready-to-use config file, which lists the various assets needed for the application below, can be downloaded &lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:amlquicklaunch_config_clara_covid19_ctscan_example?ncid=progr-95598#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;here&lt;/A&gt;. No additional modifications are required for this file.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The ngc_config.json file looks like:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
    "base_dockerfile":"nvcr.io/nvidia/clara-train-sdk:v3.0",
    "additional_content": {
        "download_content": true,
        "unzip_content": true,
        "upload_content": true,
        "list":[
            {
"url":"https://api.ngc.nvidia.com/v2/models/nvidia/med/clara_train_covid19_3d_ct_classification/versions/1/zip",
                "filename":"clara_train_covid19_3d_ct_classification_1.zip",
                "localdirectory":"clara/experiments/covid19_3d_ct_classification-v2",
                "computedirectory":"clara/experiments/covid19_3d_ct_classification-v2",
                "zipped":true
            },
            {
"url":"https://api.ngc.nvidia.com/v2/resources/nvidia/med/getting_started/versions/1/zip",
                "filename":"clarasdk.zip",
                "localdirectory":"clara",
                "computedirectory":"clara",
                "zipped":true
            },
            {
"url":"https://api.ngc.nvidia.com/v2/resources/nvidia/azuremlclarablogquicklaunch/versions/example/zip",
                "filename":"claractscanexample.zip",
                "localdirectory":"clara/claractscanexample",
                "computedirectory":"clara",
                "zipped":true
            }
        ]
    }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When used with the azureml-ngc-tools command, the “nvcr.io/nvidia/clara-train-sdk:v3.0” container is pulled from the NGC Catalog and loaded onto the Azure ML Compute Cluster. Additionally, three resources are downloaded and unzipped into the local environment (with the names "filename" and at the relative directories "localdirectory" provided) and then loaded into the Compute Cluster at the provided location ("computedirectory").&lt;/P&gt;
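&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, the snippet below is a small sketch of the kind of download-and-unzip step performed for each entry in the "list" above, using one of the URLs and paths from the ngc_config.json file and only the Python standard library. The actual implementation inside azureml-ngc-tools may differ.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch of the per-asset download/unzip step described by ngc_config.json.
# URL, filename, and directory come from the config above; the toolkit's internals may differ.
import os
import urllib.request
import zipfile

url = "https://api.ngc.nvidia.com/v2/resources/nvidia/med/getting_started/versions/1/zip"
filename = "clarasdk.zip"
local_directory = "clara"

os.makedirs(local_directory, exist_ok=True)
urllib.request.urlretrieve(url, filename)        # download the zipped NGC asset

with zipfile.ZipFile(filename) as archive:       # unzip into the local directory
    archive.extractall(local_directory)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;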
&lt;H3&gt;&lt;A target="_blank" name="_Toc49186618"&gt;&lt;/A&gt;Run azureml-ngc-tools&lt;/H3&gt;
&lt;P&gt;Once the two configuration files are ready, run the azureml-ngc-tools command on the local machine to provision the instance:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="bash"&gt;azureml-ngc-tools --login azure_config.json --app ngc_config.json&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Refer to the following screenshot:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="NGC-Azure Machine Learning Quick Launch toolkit optimally configures an AzureML environment for a particular use-case with necessary software from the NGC Catalog pre-loaded" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215879iAACC12604C4B4880/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_4-1598982910666.png" alt="Krishna_Anumalasetty_4-1598982910666.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 4. AzureML instance being setup by NGC-AzureML Quick Launch Toolkit&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The command creates a Compute Cluster "clara-ct" using the VM size "Standard_NC12s_v3", then creates an &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment?view=azure-ml-py" target="_blank" rel="noopener"&gt;AzureML environment&lt;/A&gt;, "clara_env", with the base image "nvcr.io/nvidia/clara-train-sdk:v3.0". The container is downloaded and subsequently cached after the first use, making it easier for you to reuse the container the next time.&lt;/P&gt;
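&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Conceptually, an Azure ML environment backed by an NGC container can be expressed with the SDK roughly as in the sketch below. The names mirror the configuration files above, but this is a simplified illustration rather than the toolkit's exact code.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: an Azure ML Environment built on the NGC Clara Train container.
# Names and settings mirror azure_config.json and ngc_config.json; the toolkit's own code may differ.
from azureml.core import Environment

env = Environment(name="clara_env")
env.docker.base_image = "nvcr.io/nvidia/clara-train-sdk:v3.0"   # container pulled from the NGC Catalog
env.python.user_managed_dependencies = True                     # use the Python stack baked into the image&lt;/LI-CODE&gt;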
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Next, the additional NGC Catalog content is downloaded and unzipped locally and then uploaded to the Compute Cluster. You should be able to see the newly created Compute Cluster, "clara-ct", in the Azure Portal under the specified workspace ("TutorialTestA1" in this example), under Compute -&amp;gt; Compute clusters. This should look like the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The compute resources instantiated and configured by the NGC-AzureML Quick Launch toolkit can be viewed in the Azure Console as well" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215880i4CB2FE2280152D2B/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_5-1598982910675.png" alt="Krishna_Anumalasetty_5-1598982910675.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 5. Console view of AzureML Environment Setup by NGC-AzureML Quick Launch Toolkit&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additional Clara introductory content, such as examples of other pre-trained models, datasets, and Jupyter notebooks, is also downloaded and unzipped locally at the relative directories provided in the “ngc_config.json” file. That content is also uploaded to the Compute Cluster, visible as follows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The software assets from the NGC Catalog to be pre-loaded on the AzureML instance are listed in the ngc_config.json file. This is pre-configured for each use-case and can also be modified as per your need" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215882iCCD391C70600EBDB/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_6-1598982910684.png" alt="Krishna_Anumalasetty_6-1598982910684.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 6. Assets from the NGC Catalog pre-loaded onto Azure ML by NGC-AzureML Quick Launch toolkit&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The command creates and launches JupyterLab on the Compute Cluster and forwards the port to the local machine so that JupyterLab can be accessed locally. A URL is generated that links to the JupyterLab instance running on the Azure ML setup, along with the assets specified in the “ngc_config.json” file, ready to be used from your local machine.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Upon successful setup, NGC-AzureML Quick Launch toolkit gives you a URL taking you directly to a fully setup, optimally configured running AzureML JupyterLab where you can start building right away" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215884i26AC2D984781D29A/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_7-1598982910696.png" alt="Krishna_Anumalasetty_7-1598982910696.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 7. Direct URL to fully setup and configured Azure ML JupyterLab for this particular use-case &lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you have completed your work in JupyterLab, the Compute Cluster can be stopped by simply entering CTRL+C in the terminal.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The JupyterLab can be launched by copying and pasting the URL into a browser window, as follows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The URL produced by the NGC-AzureML Quick Launch toolkit simply needs to be copied over into your browser of choice to gain access directly to a ready-to-use JupyterLab environment running on AzureML" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215961iC3151587E41C4CE8/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_0-1599017677253.png" alt="Krishna_Anumalasetty_0-1599017677253.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 8. URL opened on any browser window launches ready-to-use Azure ML JupyterLab&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that JupyterLab starts at the root folder of the provided workspace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To access the content uploaded by the NGC-AzureML Quick Launch Toolkit, you should navigate to the “workspaceblobstore/clara/” folder. All your relevant content will be now available in the session:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The required assets from NGC listed in the pre-configured ngc_config.json file for this use-case are uploaded behind the scenes during setup" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215886iE2F1DA392A2E7718/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_9-1598982910724.png" alt="Krishna_Anumalasetty_9-1598982910724.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 9. Overview of NGC assets pre-loaded onto your AzureML JupyterLab for this use-case&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;A target="_blank" name="_Toc49186619"&gt;&lt;/A&gt;Fine-Tuning a Model with Clara Imaging and Azure&amp;nbsp; ML&lt;/H1&gt;
&lt;P&gt;A video that walks through how to fine-tune the model described in this section can be found at &lt;A href="https://channel9.msdn.com/Shows/Docs-AI/Part-3-Accelerating-AI-for-COVID-19-Fine-tuning-the-pretrained-COVID-19-Image-Classification-Model" target="_self"&gt;Fine Tuning the pre-trained model&lt;/A&gt;. To demonstrate how to build and deploy AI for medical imaging using Clara Imaging and Azure ML, we will fine-tune a pre-trained &lt;A href="https://ngc.nvidia.com/catalog/models/nvidia:med:clara_train_covid19_3d_ct_classification?ncid=progr-19529#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;COVID-19 CT Scan Classification model&lt;/A&gt; and optimize it for inference with a custom dataset. This pre-trained model was developed by &lt;A href="https://developer.nvidia.com/clara?ncid=progr-22696#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NVIDIA Clara&lt;/A&gt; researchers in collaboration with the NIH, which had a repository of CT radiological images from around the world. The NVIDIA pre-trained model reports an accuracy of over 90% on classification of COVID vs. non-COVID findings from chest CT scans. More information about the training methodology and the results achieved by this pre-trained model is available in a white paper &lt;A href="https://www.nature.com/articles/s41467-020-17971-2" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The model uses two inputs: a CT scan image and a lung segmentation image. A computerized tomography (CT) scan is a 3D medical imaging procedure that uses computer-processed combinations of many X-ray measurements taken from different angles to produce cross-sectional (tomographic) slices of images. The data needs to be preprocessed, first by converting to Hounsfield units and then rotating to a prescribed orientation, before it can be used for training. A multitude of other pre-trained models developed by NVIDIA Clara for various healthcare applications are available for free in the NGC Catalog. (N.B. These models are provided for research purposes only.)&lt;/P&gt;
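&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To make the preprocessing concrete, the snippet below is a small, illustrative sketch of converting one CT slice to Hounsfield units using the DICOM rescale slope and intercept and then rotating it. It assumes the pydicom and NumPy libraries and a placeholder file path, and is not the exact preprocessing pipeline used by the pre-trained model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: convert a raw CT slice to Hounsfield units and rotate it to a fixed orientation.
# The file path is a placeholder; the model's actual preprocessing may apply additional steps.
import numpy as np
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")             # placeholder path to one CT slice
hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

hu = np.rot90(hu)                                # rotate to the prescribed orientation (illustrative)&lt;/LI-CODE&gt;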
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following images show one slice of the stack of images that comprise the patient’s CT scan, along with the corresponding mask:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The model requires two images as inputs, a CT scan image, and a lung segmentation mask, to guide the model to focus on the lung area" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215887i4BF90705F0931AB3/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_10-1598982910744.png" alt="Krishna_Anumalasetty_10-1598982910744.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 10. Example CT-Scan Slice with the corresponding mask (on the right) to focus the model on the lungs&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The classifier produces a probability between zero and one, indicating whether the patient has or has not been infected with COVID-19. However, such predictions require a high degree of accuracy, and in most cases the CT scans used for training may have different characteristics from the original data set used to build the base model. Thus, the model will need to be fine-tuned to achieve the necessary accuracy. NVIDIA Clara includes a &lt;A href="https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v2.0/aiaa/model_finetune.html" target="_blank" rel="noopener"&gt;fine-tuning mechanism&lt;/A&gt; to efficiently adapt the pre-trained model to your dataset, with minimal effort, while increasing the accuracy of your model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The hospital data for this blog is simulated using 40 labelled studies from two sources:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;CT scans from COVID patients in the &lt;A href="https://www.kaggle.com/andrewmvd/covid19-ct-scans" target="_blank" rel="noopener"&gt;COVID-19 CT scans Kaggle Database&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Non-COVID CT scans from &lt;A href="https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI" target="_blank" rel="noopener"&gt;The Cancer Imaging Archive&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;A target="_blank" name="_Toc49186620"&gt;&lt;/A&gt;&lt;SPAN&gt;Running the &lt;/SPAN&gt;&lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:azuremlclarablogquicklaunch/files?version=example&amp;amp;ncid=progr-31403#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;CovidCT-ScanClassifier.ipynb&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt; Jupyter Notebook&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The ready-to-use Jupyter notebook example code included with this blog walks through these 4 stages" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215885i61E7A7BC68C6A237/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_11-1598982910746.png" alt="Krishna_Anumalasetty_11-1598982910746.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 11. The Jupyter notebook for this blog example walks through these steps&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc49186621"&gt;&lt;/A&gt;&lt;SPAN&gt;Step 1: Set Up&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The ready-to-use Jupyter notebook first describes the mechanics of how the pre-trained model is built, and how it can be fine-tuned, by introducing the concept of an MMAR (&lt;U&gt;M&lt;/U&gt;edical &lt;U&gt;M&lt;/U&gt;odel &lt;U&gt;AR&lt;/U&gt;chive). The dataset from Kaggle is downloaded (using the user’s previously obtained Kaggle key) and the input examples are examined and visualized. The new data is then indexed in a way that Clara can refer to it for different tasks, such as inferring whether the patient has COVID-19, fine-tuning, and/or deploying the model. Refer to the &lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:azuremlclarablogquicklaunch/files?version=example&amp;amp;ncid=progr-31403#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;CovidCT-ScanClassifier.ipynb Jupyter notebook&lt;/A&gt; for more details on how these tasks are performed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The data set that we are using to fine-tune the model consists of 40 sample studies. To test the current accuracy of the model, we’ll use three data points from the set. We’ll exclude these three data points when we eventually fine-tune the model.&lt;/P&gt;
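&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A simple way to picture this split is sketched below: three of the 40 labelled studies are held out for testing, and the remainder are used for fine-tuning. The study identifiers are hypothetical; the notebook builds the actual lists from the Kaggle data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: hold out three of the 40 labelled studies for testing and fine-tune on the rest.
# Study identifiers are hypothetical placeholders.
all_studies = [f"study_{i:02d}" for i in range(40)]   # 40 labelled studies

test_studies = all_studies[:3]                        # 3 examples reserved to measure accuracy
train_studies = all_studies[3:]                       # 37 examples used for fine-tuning

print(len(train_studies), "training studies,", len(test_studies), "test studies")&lt;/LI-CODE&gt;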
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Jupyter notebook separates those examples as seen here:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Since there are only 40 examples in the Kaggle dataset being used in this example (reference dataset) for fine-tuning, we separate the data to be used for fine-tuning (training) and testing for inference" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215888iF721575E01D07546/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_12-1598982910756.png" alt="Krishna_Anumalasetty_12-1598982910756.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 12. Slicing the reference dataset for training and testing&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;SPAN&gt;Step 2: Classify/Infer on Test Data&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Use the infer.sh command to check the accuracy of the model by using the test data.&lt;/P&gt;
&lt;P&gt;The MMAR from the base model has the original configuration files used to train the base model with. Those files need to be adapted to the new data before the infer.sh command is run. The Jupyter notebook will execute all the modifications needed to the required files.&lt;/P&gt;
&lt;P&gt;Once the configuration files have been adapted, the infer command is run on the test data.&lt;/P&gt;
&lt;H3&gt;&lt;A target="_blank" name="_Toc49186622"&gt;&lt;/A&gt;Inspecting the inference results&lt;/H3&gt;
&lt;P&gt;The Jupyter Notebook then retrieves the probabilities produced by the “new_infer.sh” command and estimates the predicted labels (1: COVID, 0: no COVID). Those labels are then used along with the true labels to compute the testing examples’ average precision score using the sklearn.metrics.average_precision_score function.&lt;/P&gt;
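&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, the precision computation boils down to something like the sketch below, where the predicted probabilities and ground-truth labels are placeholders standing in for the values produced by new_infer.sh and the annotated dataset.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: score the test predictions with scikit-learn's average precision metric.
# The probabilities and true labels below are placeholders for the notebook's actual values.
from sklearn.metrics import average_precision_score

y_true = [1, 0, 1]                 # annotated labels (1: COVID, 0: no COVID)
y_scores = [0.92, 0.40, 0.55]      # probabilities produced by the inference step

print("Average precision:", average_precision_score(y_true, y_scores))&lt;/LI-CODE&gt;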
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The predicted labels classified by the pre-trained model without fine-tuning and compared against the expected labels from the annotated reference dataset to set a base-line precision for the pre-trained model on the reference dataset without fine-tuning" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215889iDFB9FBE5771B667F/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_13-1598982910764.png" alt="Krishna_Anumalasetty_13-1598982910764.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 13. Testing the pre-trained model for performance without fine-tuning on the new reference dataset&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice that the average precision (0.833) is not as high as expected (0.90, i.e. 90%, or higher) because some instances of the new data have either not been preprocessed into Hounsfield units or haven’t been rotated to the prescribed orientation. We can now improve the accuracy of the model by fine-tuning it with the full data set.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc49186623"&gt;&lt;/A&gt;Step 3 Fine Tune&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;The train_finetune.sh command executes the fine-tuning mechanism where the original script and its configuration files are adapted to point to the new training data.&lt;/P&gt;
&lt;P&gt;&lt;A target="_blank" name="_Toc49186624"&gt;&lt;/A&gt;Execute new_train_finetune.sh Command&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="align=" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215890i60545732FEEA0D58/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_14-1598982910775.png" alt="Krishna_Anumalasetty_14-1598982910775.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The training set of data from the reference dataset that was previously sliced for training and testing is used to fine-tune the pre-trained model" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215893i3A324761A5DE89BF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_15-1598982910798.png" alt="Krishna_Anumalasetty_15-1598982910798.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 14. Fine-tuning the Clara COVID-19 CT Scan Classification pre-trained model with training set of reference dataset&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the training is complete, you can view the training accuracy, training loss, mean accuracy, and the time it took for the fine-tuning to finish. You can decide to either export the model or continue fine-tuning the model. By default, the resulting model is not automatically saved nor exported. The trained model can be exported using the export.sh command. This produces the frozen graphs necessary for the model to be used by other applications, such as Clara Deploy. The Jupyter notebook sets up the new model so that it can be used to classify the test data.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;A target="_blank" name="_Toc49186626"&gt;&lt;/A&gt;Step 4: Re-test Reclassifying Inference with fine-tuned model&lt;/H2&gt;
&lt;P&gt;The Jupyter notebook creates a new infer.sh command that points to the full data set and to the new fine-tuned model and then executes the command.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The next step is to compute the Average COVID-19 Classification Precision across all testing examples. The Jupyter Notebook then retrieves the probabilities produced by the “finetuned_infer.sh” command and estimates the predicted labels with 1 (high likelihood of infection) or 0 (low likelihood of infection).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Those labels are then used along with the true labels to compute the testing examples’ average precision score using the sklearn.metrics.average_precision_score function.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The testing set of data from the reference dataset that was previously sliced for training and testing is used to test the performance of the fine-tuned model. The predicted labels classified by the fine-tuned model compared against the expected labels from the annotated reference dataset establish an improved accuracy and precision over the previously set base-line precision without fine-tuning the model" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215892i146B97A4DE970F46/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_16-1598982910808.png" alt="Krishna_Anumalasetty_16-1598982910808.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 15. Testing the fine-tuned model for performance on the testing slice of data from the new reference dataset&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice that the average precision is now much higher, so the fine-tuning mechanism has succeeded in adapting the model to account for the peculiarities of the new data.&lt;/P&gt;
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;H1&gt;&lt;A target="_blank" name="_Toc48730758"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc49186628"&gt;&lt;/A&gt;Deploying the Model&lt;/H1&gt;
&lt;P&gt;After the AI model is exported and checked, the Medical Model Archive (MMAR) can then be connected into research workflow pipelines. These pipelines contain operators for the various phases of pre-transforms and inference, and the results are consumable by a medical imaging ecosystem (e.g. a DICOM-SR, a secondary capture image with burnt-in results, or an HL7 or FHIR message). The imaging pipeline can be constructed using the &lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:clara:clara_bootstrap?ncid=progr-21287#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;Clara Deploy SDK&lt;/A&gt;, which uses building blocks called operators. Clara Deploy SDK comes with many ready-made pipelines for getting started.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Understanding and mapping the architecture of any given environment is important when constructing this pipeline. For example, simulating the connectivity from an image manager like a PACS, by using transformation frameworks to transform the data for the model, is a good first step. Inference tasks follow, and the outputs are then delivered in the format accepted by the systems and devices at medical institutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="The Clara Deploy application framework can be used to extend the fine-tuned model into a research workflow pipeline" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215891i069250A4893472CF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Krishna_Anumalasetty_17-1598982910809.png" alt="Krishna_Anumalasetty_17-1598982910809.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Figure 16. A sample deployment workflow using the Clara Deploy application framework&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;When thinking about the results, it is important to consider the types of data being produced and the systems they will be deployed on. For instance, is it a classification result (presence or absence of a disease, like COVID-19), a DICOM-SR for display within a PACS, or a FHIR Observation object? If it is a segmentation result (identifying a lesion or nodule), creating a DICOM Segmentation object may be appropriate. These are examples of the types of objects consumable by the medical imaging ecosystem, and the architecture of the environment is important to know when constructing this pipeline.&lt;/P&gt;
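&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an illustration of the last point, a classification result could be packaged as a FHIR Observation. The sketch below shows one plausible, minimal payload; the codes, references, and value are placeholders, and it is not a complete or validated FHIR resource.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Sketch: a minimal, illustrative FHIR Observation carrying a classification result.
# Codes, references, and values are placeholders, not a validated FHIR payload.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "COVID-19 CT classification"},           # placeholder coding
    "subject": {"reference": "Patient/example"},               # placeholder patient reference
    "valueString": "COVID-19 findings present (probability 0.94)",
}

print(json.dumps(observation, indent=2))&lt;/LI-CODE&gt;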
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;A target="_blank" name="_Toc48730759"&gt;&lt;/A&gt;&lt;A target="_blank" name="_Toc49186630"&gt;&lt;/A&gt;Summary&lt;/H1&gt;
&lt;P&gt;We have shown you how to get started with Clara Train on Azure ML for radiological CT images, using a pre-trained AI COVID-19 classification model. This is just one example of what can be built with the software from the NGC Catalog and Azure ML. Beyond building healthcare-centric applications, you can use the containers, models, and SDKs from the NGC Catalog to build applications across other use cases, such as conversational AI, recommendation systems, and many more.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://ngc.nvidia.com/catalog/resources/nvidia:amlquicklaunch_config_clara_covid19_ctscan_example?ncid=progr-95598#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;Get started today&lt;/A&gt; with &lt;A href="https://ngc.nvidia.com/catalog/containers/nvidia:clara-train-sdk?ncid=progr-50886#cid=ngc01_progr_en-us" target="_blank" rel="noopener"&gt;NVIDIA Clara from the NVIDIA NGC Catalog&lt;/A&gt; on &lt;A title="Azure Machine Learning" href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure ML&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 11 Oct 2020 18:43:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerating-ai-for-covid-19-on-microsoft-azure-machine-learning/ba-p/1536595</guid>
      <dc:creator>Krishna_Anumalasetty</dc:creator>
      <dc:date>2020-10-11T18:43:42Z</dc:date>
    </item>
    <item>
      <title>Dealing with Imbalanced Data in AutoML</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/dealing-with-imbalanced-data-in-automl/ba-p/1625043</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;What is Imbalanced Data?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;For a given classification problem, if the classes/targets within the dataset are not represented equally, then the dataset is said to be imbalanced.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;The classes with a higher representation are called majority classes, while the ones with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;lower representation&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;are called minority classes&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. Sometimes imbalance&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;exists&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;due to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;limited&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;availability of&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;data for certain classes&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, while in other cases it could be&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;attributed to the&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;nature of the dataset. For instance, if there is a spam classification dataset, we’d expect more&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;non-spam&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;emails t&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;han spam emails&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. Similarly, for fraud detection, we would expect the fraudulent transactions to occur&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;only&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rarely.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;It is harder&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;for learning&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;model&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;generate&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;predict&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ions for&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the minority classes because&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;model&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;has access to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;limited training examples representing those classes&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, making it difficult for it to learn to distinguish these classes. Most ML algorithms for classification&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;perform well&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;when all classes have a roughly&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;equal distribution&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;and hence it becomes important to address&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the problem of class imbalance.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;How do we detect class imbalance?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;While there is no fixed definition of what constitutes an imbalance,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;generally&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;it is detected&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;using&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;one of the following:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="14" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Ratio of the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;samples in the least populated&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;class to the overall number of samples in the dataset&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Let’s call this Ratio-1.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="14" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Ratio of the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;samples in the least populated class&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;to the&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;samples in the most populated&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;class&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Let’s call this Ratio-2.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s evaluate both methods using a couple of sample cases:&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1)" data-font="Calibri" data-listid="15" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;For a four&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;class&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;problem with data distributed among labels like this: {'a': 20, 'b': 20, 'c': 20, 'd': 200},&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ratio-1&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;is 7.7%&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, while&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ratio-2 is 10%.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1)" data-font="Calibri" data-listid="15" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;For a four&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;class problem with data distributed among labels like this: {'a': 20, 'b': 200, 'c': 200, 'd': 200},&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ratio-1&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;is 3.2%&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, while Ratio-2 is 10%.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;As seen in the above examples,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ratio-2&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;remains consistent, even if the constitution of the other classes in the overall data changes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Hence&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="none"&gt;we&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;utilize&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ratio-2&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;as the ind&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;i&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;cator of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;i&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;mbalance in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;AutoML&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and imbalance is detected when this ratio is lower than 20%.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;How to address the&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;problem of Imbalance&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;?&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Ideally one would prefer&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;collect&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ing&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;more data for the minority class&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(es)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;However, often that&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;is quite expensive&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;or may not be feasible&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;due to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the nature of the problem&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;e.g.,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;fraud or spam detection).&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Therefore, we leverage the following methods for dealing with imbalanced data within AutoML:&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="13" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Using&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;w&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;eights for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;c&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;lass&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;b&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;alancing&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;: this feature gets automatically applied in AutoML if it improves performance on a subset of the user’s data (more details in later section&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="13" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Using metrics that are sensitive to imbalance&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;: users can pick relevant metrics based on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-manage-ml-pitfalls#handle-imbalanced-data" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;recommendations&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="13" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="none"&gt;Leveraging the sample weights provided directly by the user&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;via the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;AutoML config&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Additionally, u&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;sers can apply&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;over/under sampling to rebalance the data&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;before feeding it to AutoML&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Methods #1 and #3 will impact the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ML algorithm’s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;cost function&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;i&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;n&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the same manner&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;covered&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;in the next section), but the difference is that in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;method #1 AutoML will automatically do it for you&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. This feature of Class Balancing using weights is the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;focus&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;of this blog and we&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;shall elucidate that next.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Weights for Class Balancing:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Without actually&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;over&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;-&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;sampling the minority classes or&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;under-sampling&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the majority classes&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, we can simply apply weights to the samples belonging to a class, in the inverse proportion of the number of samples representing that class&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;(Fig 1 elaborates on this calculation)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. The intent is that the underrepresented classes would have a higher weight than the overrepresented ones.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;We leverage this method because it allows us to apply balancing without&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;increasing&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the size of the dataset, and hence without modifying memory requirements. Additionally, even if the classes are well balanced, applying this method would simply apply uniform weights, and in theory this method could be applied for all datasets.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To understand the impact of applying weights, let’s review the cost function&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;J(&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;θ&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;for a Logistic Regression classifier below. Here&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;m&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;is&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;the number of training samples,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;x and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;y&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are the features and labels respectively&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;θ&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;refers&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to the model&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;parameters&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Arjun-Singh_0-1598961758776.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215807i79CFB7DEB644231B/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Arjun-Singh_0-1598961758776.png" alt="Arjun-Singh_0-1598961758776.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;When we use the Weights for Class Balancing, the above cost function is modified to apply the class weight corresponding to every training sample, as shown below, where c&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;i&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;refers to the class weight for the&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;i&lt;SUP&gt;th&amp;nbsp;&lt;/SUP&gt;training sample.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Arjun-Singh_1-1598961758778.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215808iDC67899FFE042B9F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Arjun-Singh_1-1598961758778.png" alt="Arjun-Singh_1-1598961758778.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Using metrics sensitive to imbalance:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For dealing with class imbalance we&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;recommend using&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;AUC&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;-&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;weighted&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;as&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;the optimization&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;metric&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for AutoML runs&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. There are several benefits, as follow&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="12" aria-setsize="-1" data-aria-posinset="16" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Any macro average will independently calculate the average for every class&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;then take the&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;overall&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;average, thus treating all classes equally; whereas a weighted average will calculate the contribution of every class based on the relative number of samples representing that class, and hence this&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;is more robust to imbalance&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Hence,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;we&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;c&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;h&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;oose&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;a weighted metric.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="12" aria-setsize="-1" data-aria-posinset="16" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Additionally, AUC-weighted&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;threshold invariant&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;since it&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;measures the area under the (ROC)&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;curve&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;over all possible classification thresholds&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;by aggregating them&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="12" aria-setsize="-1" data-aria-posinset="16" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;W&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e also report several other metrics&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sensitive to class imbalance, such as&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;F1, P&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;recision&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, R&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ecall and&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;M&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;atthew’s&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;C&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;orrelation&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Coefficient&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
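&lt;P&gt;For reference, here is a small scikit-learn sketch, independent of AutoML, that computes the weighted AUC and a few of the other imbalance-aware metrics mentioned above on held-out predictions; the toy labels and probabilities are made up:&lt;/P&gt;
&lt;PRE&gt;import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, matthews_corrcoef

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 1, 0, 1, 1, 2, 0])
# Predicted class probabilities, one column per class (rows sum to 1).
y_proba = np.array([
    [0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.9, 0.05, 0.05], [0.6, 0.3, 0.1],
    [0.4, 0.5, 0.1], [0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.3, 0.6, 0.1],
    [0.1, 0.2, 0.7], [0.5, 0.3, 0.2],
])

print(roc_auc_score(y_true, y_proba, average="weighted", multi_class="ovr"))
print(f1_score(y_true, y_pred, average="weighted"))
print(matthews_corrcoef(y_true, y_pred))
&lt;/PRE&gt;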
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Analysis and Experimentation:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;After selecting the solution and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;metric&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, we performed A/B tests&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;on a variety of datasets with the following setup:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Treatment Group:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;using&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;weights for treating&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;class&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;im&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;balanc&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Control Group: with no class balancing&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Significance Level (Type 1 Error probability) preselected as&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;5&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;%&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN class="TextRun  BCX8 SCXW111857110" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun  BCX8 SCXW111857110"&gt;H&lt;SUB&gt;0&amp;nbsp;&lt;/SUB&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;(Null Hypothesis):&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;There is no impact on the performance of AutoML after applying weight-&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;balancing&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Ha&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;(Alternative Hypothesis): Weight balancing improves the performance of AutoML&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559685&amp;quot;:1080,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="%2." data-font="" data-listid="6" aria-setsize="-1" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;One tailed&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;t-&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;tests were performed&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;by leveraging the sample statistics like sample standard deviation from multiple&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;runs&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;performed on the same datasets&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
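&lt;P&gt;As an illustrative sketch of step 6 (not the exact test used internally), a one-tailed two-sample t-test can be computed from summary statistics of repeated runs using SciPy; the numbers below are made up:&lt;/P&gt;
&lt;PRE&gt;from scipy.stats import ttest_ind_from_stats

# Hypothetical AUC_weighted summary statistics from repeated runs on the same dataset.
treatment_mean, treatment_std, treatment_runs = 0.87, 0.012, 10   # with class weights
control_mean, control_std, control_runs = 0.85, 0.015, 10         # without balancing

# ttest_ind_from_stats returns a two-sided p-value; for the one-tailed test
# (Ha: treatment is better than control) halve it when the treatment mean is higher.
t_stat, p_two_sided = ttest_ind_from_stats(
    treatment_mean, treatment_std, treatment_runs,
    control_mean, control_std, control_runs)
p_one_tailed = p_two_sided / 2   # valid here because the treatment mean is higher

# Reject H0 at the 5% significance level when the one-tailed p-value falls below 0.05.
print("t =", t_stat, "one-tailed p =", p_one_tailed)
&lt;/PRE&gt;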
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Extensive A/B testing with this setup demonstrated t&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;hat we could leverage subsampled data&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;reliably to&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;d&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;etermine the effectiveness of our class balancing sol&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ution.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;R&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;unning A/B tests on subsampled data rather than the full data enabled us to reduce the overall execution time.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;se tests utilize&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sample statistics&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;computed using&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;binomial distribution&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and some heuristics&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. This helps us&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;decide if balancing the classes using weights would improve the performance&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and the complete flow is described in the graphic below.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Additionally,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;AutoML’s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-features#data-guardrails" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;G&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;uardrails&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;feature&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;informs the user if their data exhibits imbalance and if&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the class&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;balancing solution was applied&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Arjun-Singh_2-1598961758786.png" style="width: 831px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/215809i0E3E0E2325ADA3D5/image-dimensions/831x447?v=v2" width="831" height="447" role="button" title="Arjun-Singh_2-1598961758786.png" alt="Arjun-Singh_2-1598961758786.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H6&gt;&lt;FONT color="#808080"&gt;Figure 1: Flow chart describing the handling of imbalanced data within AutoML&lt;BR /&gt;&lt;BR /&gt;&lt;/FONT&gt;&lt;/H6&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If you’re getting started today with Microsoft Azure’s Automated Machine Learning, here are a couple of helpful links:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/automatedml/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://azure.microsoft.com/en-us/services/machine-learning/automatedml/&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml#how-automated-ml-works" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml#how-automated-ml-works&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Contributors:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Arjun Singh – Data &amp;amp; Applied Scientist II&lt;/P&gt;
&lt;P&gt;Anup Shirgaonkar – Principal Data &amp;amp; Applied Scientist&lt;/P&gt;</description>
      <pubDate>Tue, 01 Sep 2020 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/dealing-with-imbalanced-data-in-automl/ba-p/1625043</guid>
      <dc:creator>Arjun-Singh</dc:creator>
      <dc:date>2020-09-01T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Improve remote learning with speech-enabled apps powered by Azure Cognitive Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/improve-remote-learning-with-speech-enabled-apps-powered-by/ba-p/1612807</link>
      <description>&lt;H1&gt;Improve remote learning with speech-enabled apps powered by Azure Cognitive Services&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post was co-authored by Melissa Ma, Yueying Liu, Anny Dow and Sheng Zhao&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Online learning has grown rapidly over the last couple of months as schools and organizations adapt to new ways of connecting and methods of education. Speech technology can play a significant role in making distance learning more engaging and accessible to students of all backgrounds. With Azure Cognitive Services, developers can quickly add speech capabilities to applications, bringing online learning to life.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;Enhancing language fluency with pronunciation assessment&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;One key element in language learning is improving pronunciation skills. For new language learners, practicing pronunciation and getting &lt;/SPAN&gt;&lt;SPAN&gt;timely feedback&lt;/SPAN&gt;&lt;SPAN&gt; is essential to becoming a more fluent speaker. In the current environment, online language learning and the ability to practice anytime, anywhere, has become even more important. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;At the Build conference in May, we &lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/meeting-the-challenges-of-today-and-tomorrow-with-azure-ai/" target="_blank" rel="noopener"&gt;announced&lt;/A&gt;&lt;SPAN&gt; the preview of the pronunciation assessment capability, powered by &lt;/SPAN&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/" target="_blank" rel="noopener"&gt;Speech to Text.&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The pronunciation assessment capability evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio, allowing users to benefit from:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Highly accurate evaluations &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;- Provides consistent and accurate evaluation results using a machine learning-based approach that correlates highly with speech assessments conducted by native experts. The pronunciation assessment model was trained with 100,000+ hours of speech data from native English speakers and is highly robust. It assesses three dimensions of pronunciation: accuracy, fluency and completeness. Pronunciation assessment can provide evaluations at multiple levels of granularity, returning accuracy scores for specific phonemes, words, sentences, or even whole articles. &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Ability to account for inserted and omitted words – &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Enables rich &lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-speech-to-text#pronunciation-assessment-parameters" target="_blank" rel="noopener"&gt;configuration parameters&lt;/A&gt;&lt;SPAN&gt; to support flexibility in using the API.&lt;/SPAN&gt;&lt;SPAN&gt; Using NLP techniques and the &lt;/SPAN&gt;EnableMiscue setting&lt;SPAN&gt;, pronunciation assessment can detect errors such as extra, missing, or repeated words when compared with the reference text, which assists in more accurate scoring. This is particularly useful for longer paragraphs of text.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;STRONG&gt;Real-time streaming - &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Supports &lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-speech-to-text#chunked-transfer" target="_blank" rel="noopener"&gt;streaming upload&lt;/A&gt;&lt;SPAN&gt; on audio files for immediate feedback. &lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation. Online learning solution providers or educators can use the capability to evaluate pronunciation of multiple speakers in real-time.&amp;nbsp;Pronunciation assessment currently supports the English language.&lt;/SPAN&gt;&lt;/P&gt;
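&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough illustration (the official samples linked at the end of this post are the reference), recent versions of the azure-cognitiveservices-speech Python package expose pronunciation assessment along these lines; the subscription key, region, audio file, and reference text below are placeholders:&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="student_recording.wav")

# Assess pronunciation against a known reference sentence.
pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Today was a beautiful day.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("accuracy:", assessment.accuracy_score)
print("fluency:", assessment.fluency_score)
print("completeness:", assessment.completeness_score)
&lt;/PRE&gt;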
&lt;P&gt;&lt;SPAN&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=cBE8CUHOFHQ" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/cBE8CUHOFHQ/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Educational organizations, like the Tomorrow Advancing Life (TAL) Education Group, are already building applications using pronunciation assessment to help students practice language learning remotely.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;“Effectively and efficiently teaching accurate pronunciation to students of different levels is a big challenge, both in class and outside of class. The Speech service’s pronunciation assessment capability provides a powerful solution to address this challenge. We’ve been highly impressed by the robustness of pronunciation assessment and its ability to deal with noisy environments, and how well it correlates with pronunciation evaluations conducted by our teachers.” &lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;- &lt;STRONG&gt;Xiangyu&amp;nbsp;Hu&lt;/STRONG&gt;, AI Scientist of Tomorrow Advancing Life (TAL) Education Group&amp;nbsp;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Learn how you can get started with pronunciation assessment using our &lt;A href="https://www.youtube.com/watch?v=zFlwm7N4Awc" target="_blank" rel="noopener"&gt;tutorial video&lt;/A&gt;, and download the source code from &lt;A href="https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/CSharp/WPF" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt; to try it out.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=zFlwm7N4Awc" align="center" size="large" width="600" height="450" uploading="false" thumbnail="https://i.ytimg.com/vi/zFlwm7N4Awc/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Developing interactive courses with Text to Speech&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another way that Speech technology can support better online learning experiences is through &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;Text to Speech&lt;/A&gt;, a Speech service feature that converts text to lifelike speech. Educators can create interactive materials with highly expressive and humanlike voices using Neural Text to Speech (Neural TTS), now available in 36 voices across 31 languages. (Learn about our most recent languages &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Neural TTS, developers can add natural-sounding voice to learning materials, for scenarios like slide narration. Neural TTS can also be used for reading aloud any content, facilitating new ways for students to interact with material as well as increasing accessibility for students with learning differences. Educational organizations can also use Neural TTS to create AI-powered virtual “teachers” that interact with students to make online courses more engaging.&lt;/P&gt;
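&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, here is a short, illustrative Python sketch using the Speech SDK to narrate lesson text with a neural voice; the key, region, voice name, and output file are placeholders to adapt:&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Pick any available neural voice; en-US-AriaNeural is used here as an example.
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"

# Write the narration to a file that can be attached to a slide or course page.
audio_config = speechsdk.audio.AudioOutputConfig(filename="lesson_narration.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

synthesizer.speak_text_async("Welcome to today's lesson on photosynthesis.").get()
&lt;/PRE&gt;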
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="NTTS-edge.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/214755i516BAB00F228EECA/image-size/large?v=v2&amp;amp;px=999" role="button" title="NTTS-edge.gif" alt="Experience the Neural Voices with the new Edge browser" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Experience the Neural Voices with the new Edge browser&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt; capability, online learning solution providers can further create interactive learning experiences for their students in a voice that represents their brand, or develop unique voices for different characters. For example, Duolingo, one of the world’s most popular language learning apps, is creating unique voices for different characters used in the lessons. &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;SSML&lt;/A&gt; or the &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;Audio Content Creation tool&lt;/A&gt;, users can further fine-tune audio characteristics like speaking rate, pitch, and pronunciation to fit their scenarios, with no code required. Neural TTS also supports different speaking styles, like cheerfulness and empathy, making it easier to bring audiobooks to life. We recently added &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles" target="_blank" rel="noopener"&gt;10 new voice styles&lt;/A&gt;, currently available in Chinese (the Xiaoxiao voice), with more languages to follow. With these new styles, online education solution providers can create more engaging interactive courses that express rich emotions.&amp;nbsp;&lt;/P&gt;
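&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a small, illustrative snippet (the voice, style, and text are placeholders), a speaking style can be applied with the mstts:express-as SSML element and synthesized from Python:&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Illustrative SSML: the Xiaoxiao neural voice with the cheerful speaking style.
ssml = (
    "&amp;lt;speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
    "xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='zh-CN'&amp;gt;"
    "&amp;lt;voice name='zh-CN-XiaoxiaoNeural'&amp;gt;"
    "&amp;lt;mstts:express-as style='cheerful'&amp;gt;"
    "Welcome back! Let's continue our lesson."
    "&amp;lt;/mstts:express-as&amp;gt;&amp;lt;/voice&amp;gt;&amp;lt;/speak&amp;gt;"
)
synthesizer.speak_ssml_async(ssml).get()
&lt;/PRE&gt;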
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To learn more about Audio Content Creation, watch the &lt;A href="https://www.youtube.com/watch?v=O1wIJ7mts_w" target="_blank" rel="noopener"&gt;video tutorial&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=O1wIJ7mts_w" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/O1wIJ7mts_w/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;To learn more and get started adding speech to your educational applications, check out our resources below:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Pronunciation Assessment&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try out our &lt;A href="https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/CSharp/WPF" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more with our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Check out easy-to-deploy &lt;A href="https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/CSharp/WPF" target="_blank" rel="noopener"&gt;samples&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Watch the &lt;A href="https://www.youtube.com/watch?v=cBE8CUHOFHQ" target="_blank" rel="noopener"&gt;video introduction&lt;/A&gt; and &lt;A href="https://youtu.be/zFlwm7N4Awc" target="_blank" rel="noopener"&gt;video tutorial&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Text to Speech&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Check out our &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more with our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Follow the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=script%2Cwindowsinstall&amp;amp;pivots=programming-language-csharp" target="_blank" rel="noopener"&gt;QuickStart&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Learn more about responsible deployment of &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/concepts-gating-overview" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.youtube.com/watch?v=O1wIJ7mts_w" target="_blank" rel="noopener"&gt;Video tutorial&lt;/A&gt; for the Audio Content Creation tool&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Aug 2020 08:20:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/improve-remote-learning-with-speech-enabled-apps-powered-by/ba-p/1612807</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-08-28T08:20:30Z</dc:date>
    </item>
    <item>
      <title>How Language Understanding enables voice commands for Dictate in Word</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-language-understanding-enables-voice-commands-for-dictate-in/ba-p/1609357</link>
      <description>&lt;P&gt;People are interacting with technology using voice input more than ever before. Dictation in productivity apps such as Word empower people to conquer the blank page. Dictating is a fast and easy way to get your thoughts on the page during brainstorming, outlining, and authoring content.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, &lt;A href="https://www.microsoft.com/en-us/microsoft-365/blog/2020/08/25/microsoft-365-transcription-voice-commands-word" target="_self"&gt;Office announced voice commanding on web and mobile&lt;/A&gt; as the next step to make dictation more powerful:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&lt;IFRAME src="https://www.microsoft.com/en-us/videoplayer/embed/RE4DGjj" style="width: 500px; height: 300px;" title="Dictate with voice commands"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Prior to having voice commands, dictation only allowed people to add basic text with speech-to-text and punctuation with phrases like “new line” or “period”. Now, without leaving dictation or having to switch to keyboard and mouse, you can seamlessly use voice commands to accomplish tasks such as correcting text, light formatting, and making lists using natural language. Phrases like “delete that”, “bold last word”, and “start list” are now among many new commands, and more will be added regularly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the Office dictation team was looking for a good way to understand natural spoken language commands, they reached out to the &lt;A href="https://luis.ai" target="_self"&gt;Azure Language Understanding&lt;/A&gt; team, and an exciting new collaboration formed. Language Understanding in Azure Cognitive Services enables the team to build custom natural language models that interpret a user’s natural speech instead of requiring a strict command language, all without having to be machine learning experts. With Language Understanding, you can confidently create custom language models through a UI-based authoring experience.&lt;/P&gt;
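&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of how an application might query such a model (the endpoint, app ID, key, and the intent/entity names below are placeholders, not the Office team’s actual schema), the LUIS v3 prediction REST API can be called like this:&lt;/P&gt;
&lt;PRE&gt;import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder LUIS endpoint
app_id = "YOUR-LUIS-APP-ID"                                     # placeholder app ID
prediction_key = "YOUR-PREDICTION-KEY"                          # placeholder key

url = f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
params = {"subscription-key": prediction_key, "query": "bold last word"}

response = requests.get(url, params=params).json()
prediction = response["prediction"]
print(prediction["topIntent"])   # e.g. a hypothetical intent such as "FormatText"
print(prediction["entities"])    # e.g. entities describing scope ("last word") and style ("bold")
&lt;/PRE&gt;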
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is just one of the &lt;A href="https://azure.microsoft.com/en-us/blog/extending-the-power-of-azure-ai-to-microsoft-365-users/" target="_self"&gt;many examples&lt;/A&gt; of how Microsoft products are powered by cutting edge AI technology in Azure AI. For example, suggested replies in Outlook and PowerPoint Designer each rely on Azure Machine Learning to make recommendations on quick replies and recommended design layouts, respectively. Additionally, Microsoft Teams uses Speech for live captioning of meetings, while Xbox uses Personalizer to make personalized recommendations for Xbox gamers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any developer can extend apps with rich AI-powered capabilities irrespective of machine learning expertise. Azure Cognitive Services offers pre-built AI models and simple customization capabilities that help you infuse capabilities of vision, speech, language and decision.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Aug 2020 21:17:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-language-understanding-enables-voice-commands-for-dictate-in/ba-p/1609357</guid>
      <dc:creator>AliciaEP</dc:creator>
      <dc:date>2020-08-25T21:17:33Z</dc:date>
    </item>
    <item>
      <title>Train and Score Hundreds of Thousands of Models in Parallel</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/train-and-score-hundreds-of-thousands-of-models-in-parallel/ba-p/1547960</link>
      <description>&lt;H2&gt;&lt;EM&gt;Abstract&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/" target="_self"&gt;Azure Machine Learning service&lt;/A&gt;, the training and scoring of hundreds of thousands of models with large amounts of data can be completed efficiently leveraging pipelines where certain steps like model training and model scoring run in parallel on large scale out compute clusters. In order to help organizations get a head start on building such pipelines, the &lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;&lt;EM&gt;Many Models Solution Accelerator&lt;/EM&gt;&lt;/A&gt; has been created. The Many Models Solution Accelerator provides two primary examples, one using custom machine learning and the other using AutoML. &lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_self"&gt;Give it a try today!&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;EM&gt;Executive Overview&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Many executives are looking to Machine Learning to improve their business. With the reliance on an increasingly digital world, the amount of data generated is growing faster than ever. Further, companies are purchasing 3&lt;SUP&gt;rd&lt;/SUP&gt; party datasets to combine with internal data to gain further insight and make better predictions. To make better predictions, sophisticated machine learning models are being built that leverage this large pool of data. Further, as companies expand to do business in a variety of markets and environments, general machine learning models no longer suffice; instead, more specific machine learning models are needed. Here, "general machine learning model" refers to the granularity at which the model is built: for example, building a demand forecast model for a product at the country level versus at the city level, the latter being a "specific machine learning model." Building more specific machine learning models can easily result in building hundreds of thousands of specific models instead of a handful of general models. Combining large datasets with the need to build hundreds of thousands of more specific machine learning models is not a trivial task. Doing so requires very large compute power, and the task can greatly benefit from parallelism, where multiple compute instances work simultaneously to build the machine learning models. Once those models are trained, using them to score large amounts of data presents the same problem characteristics, and again a compute cluster where multiple instances make predictions simultaneously can greatly reduce the time required. With the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/" target="_self"&gt;Azure Machine Learning service&lt;/A&gt;, the training and scoring of hundreds of thousands of models with large amounts of data can be completed efficiently, leveraging pipelines where certain steps like model training and model scoring run in parallel on large scale-out compute clusters. To help organizations get a head start on building such pipelines, the &lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;&lt;EM&gt;Many Models Solution Accelerator&lt;/EM&gt;&lt;/A&gt; has been created. The Many Models Solution Accelerator provides two primary examples, one using custom machine learning and the other using AutoML.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Azure Machine Learning, AutoML automates the building of the most common categories of Machine Learning models in a very robust and sophisticated manner. For example, a very common machine learning problem is demand forecasting. A more accurate demand forecast can increase revenues and reduce waste. Traditionally, many statistical methods have been used to do just this. However, more modern techniques leverage Machine Learning, including Deep Learning techniques, to provide a more accurate demand forecast. Further, the demand forecast can be improved by moving from forecasting a broader scope (general machine learning model) to forecasting a more granular scope (specific machine learning model). Doing so means, for example, instead of building one forecast for each product at the country level, building a forecast for each product at the city level. Moving to more specific models results in building hundreds of thousands of forecasts using large amounts of data, which as discussed above can be addressed using the &lt;EM&gt;&lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;Many Models Solution Accelerator&lt;/A&gt;.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;EM&gt;Technical Overview&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As data scientists move from building a handful of general machine learning models to hundreds of thousands of more specific machine learning models (i.e. per geography or product scope), model training and model scoring require parallel compute power to finish in a timely manner. In the Azure Machine Learning service SDK, this is accomplished using &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py" target="_blank" rel="noopener"&gt;Pipelines&lt;/A&gt; and specifically a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunStep&lt;/A&gt; which runs on a multi-node &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;Compute Cluster&lt;/A&gt;. The data scientist provides the ParallelRunStep with a custom script, an input dataset, a compute cluster, and the amount of parallelism they would like. This concept can be applied to a custom Python script and to &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train" target="_blank" rel="noopener"&gt;Automated Machine Learning (AutoML)&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train" target="_blank" rel="noopener"&gt;Automated Machine Learning (AutoML)&lt;/A&gt; uses over ten algorithms (including deep learning algorithms) with varying hyperparameters to build Classification, Regression and Forecasting models. Further, &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py" target="_blank" rel="noopener"&gt;Pipelines&lt;/A&gt; automate the invocation of AutoML across multiple nodes using&amp;nbsp; &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunStep&lt;/A&gt; to train the models in parallel as well as to &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-batch-scoring-classification" target="_blank" rel="noopener"&gt;batch score new data&lt;/A&gt;. Pipelines can be scheduled to run within Azure Machine Learning or invoked using their REST endpoint from various Azure services (i.e. Azure Data Factory, Azure DevOps, Azure Functions, Azure Logic Apps, etc). When invoked, the Parallel Pipelines run on &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;Compute Clusters&lt;/A&gt; within Azure Machine Learning. The Compute clusters can be scaled up and out to perform the training and scoring. Each node in a compute clusters can be have Terabytes of RAM, over a 100 cores, and multiple GPUs. Finally, the scored data can be stored in an a datastore in Azure, such as Azure DataLake Gen 2, and then copied to a specific location for an application to consume the results.&lt;/P&gt;
&lt;P&gt;In order to provide a jump start in leveraging Pipelines with the new ParallelRunStep, the &lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;&lt;EM&gt;Many Models Solution Accelerator&lt;/EM&gt;&lt;/A&gt; has been created. This solution accelerator showcases both a custom Python script and an AutoML script.&lt;/P&gt;
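&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an illustrative sketch only (the dataset, environment, cluster, and script names are placeholders, and the exact import path for ParallelRunConfig/ParallelRunStep should be checked against the documentation linked later in this post for your SDK version), a parallel training step can be wired up roughly as follows:&lt;/P&gt;
&lt;PRE&gt;from azureml.core import Workspace, Dataset, Environment, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

ws = Workspace.from_config()
train_files = Dataset.get_by_name(ws, name="many_models_train_files")   # placeholder FileDataset
output = PipelineData(name="training_output", datastore=ws.get_default_datastore())

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="train.py",                          # script run once per mini-batch of files
    environment=Environment.get(ws, "my-training-env"),   # placeholder environment name
    compute_target="train-cluster",                   # placeholder compute cluster name
    node_count=4,
    process_count_per_node=8,
    mini_batch_size="1",                              # one file (one group/model) per mini-batch
    error_threshold=10,
    output_action="append_row",
)

train_step = ParallelRunStep(
    name="many-models-training",
    parallel_run_config=parallel_run_config,
    inputs=[train_files.as_named_input("train_files")],
    output=output,
    allow_reuse=False,
)

pipeline = Pipeline(workspace=ws, steps=[train_step])
pipeline_run = Experiment(ws, "many-models-training").submit(pipeline)
&lt;/PRE&gt;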
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;EM&gt;Major Components&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The main components of the Many Models Solution Accelerator include an &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace" target="_blank" rel="noopener"&gt;Azure Machine Learning Workspace&lt;/A&gt;, a &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines" target="_self"&gt;Pipeline&lt;/A&gt;, a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunStep&lt;/A&gt;, a &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target" target="_blank" rel="noopener"&gt;Compute Target&lt;/A&gt;, a &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-data" target="_blank" rel="noopener"&gt;Datastore&lt;/A&gt;, and a Python Script File, as depicted in Figure 1 below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Sam_Istephan_0-1595858410872.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/208186i774F39D8DF91F0E4/image-size/large?v=v2&amp;amp;px=999" role="button" title="Sam_Istephan_0-1595858410872.png" alt="Sam_Istephan_0-1595858410872.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Figure 1. The architecture of a Pipeline with a ParallelRunStep&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For an overview of getting started with Azure Machine Learning service, please see the blog article &lt;A href="https://mlonazure.com/gettingstartedwithaml/" target="_blank" rel="noopener"&gt;MLonAzure: Getting Started with Azure Machine Learning service for the Data Scientist&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;For an overview of Pipelines, please see blog article, &lt;A href="https://mlonazure.com/pipelines/" target="_blank" rel="noopener"&gt;MLonAzure: Azure Machine Learning service Pipelines&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;EM&gt;Major Steps&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;1. Prerequisites&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An Azure Subscription along with an Azure Machine Learning Workspace are needed to get started. To get started with Azure Machine Learning, see &lt;A href="https://mlonazure.com/gettingstartedwithaml/" target="_blank" rel="noopener"&gt;Getting Started with Azure Machine Learning services&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Within Azure Machine Learning, a Compute Instance needs to be created to serve as the Data Scientist’s workstation. Using the Compute Instance, clone the &lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;Many Models Solution Accelerator&lt;/A&gt; GitHub repository.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;2.&amp;nbsp; Data Prep&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data needs to be split into multiple files (.csv or .parquet) for each group that a model is to be created for. Each file must contain one or more entire time series for the given group.&lt;/LI&gt;
&lt;LI&gt;The data must be placed in Azure Storage (e.g. ADL Gen 2, Blob Storage). The storage will then be registered as a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py" target="_blank" rel="noopener"&gt;Datastore&lt;/A&gt; from which two &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py" target="_blank" rel="noopener"&gt;FileDatasets&lt;/A&gt; will be registered, one pointing to the folder containing the training data and the other to the folder containing the data to be scored.&lt;/LI&gt;
&lt;LI&gt;For example, to build a forecast model for each brand within a store, the training sales data would be split to create files such as StoreXXXX_BrandXXXX (a minimal splitting sketch follows this list).&lt;/LI&gt;
&lt;/UL&gt;
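&lt;P&gt;For instance, a minimal pandas sketch of that split could look like the following; the source file and column names are placeholders to adapt to your data:&lt;/P&gt;
&lt;PRE&gt;import os
import pandas as pd

sales = pd.read_csv("sales_history.csv")        # placeholder source file
os.makedirs("train_data", exist_ok=True)

# One file per (store, brand) group; each file holds that group's full time series.
for (store, brand), group in sales.groupby(["Store", "Brand"]):
    group.sort_values("WeekStarting").to_csv(
        os.path.join("train_data", f"Store{store}_Brand{brand}.csv"), index=False)
&lt;/PRE&gt;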
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;3. Model Training&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The solution accelerator showcases model training with a custom python script and with AutoML which are orchestrated using a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py" target="_blank" rel="noopener"&gt;Pipeline&lt;/A&gt;. Please see &lt;A href="https://github.com/microsoft/solution-accelerator-many-models/tree/master/Custom_Script" target="_blank" rel="noopener"&gt;solution-accelerator-manymodels-customscript&lt;/A&gt; and &lt;A href="https://github.com/microsoft/solution-accelerator-many-models/tree/master/Automated_ML" target="_blank" rel="noopener"&gt;solution-accelerator-manymodels-AutoML&lt;/A&gt;. Putting it all together results in the architecture depicted in Figure 2, below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Sam_Istephan_2-1595858410885.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/208187iB9C9AEC4D7A878C6/image-size/large?v=v2&amp;amp;px=999" role="button" title="Sam_Istephan_2-1595858410885.png" alt="Sam_Istephan_2-1595858410885.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Figure 2: Solution Accelerator Model Training&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;3. a. Pipeline&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The solution accelerator leverages the &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py" target="_blank" rel="noopener"&gt;Pipeline&lt;/A&gt; object to train the model. Specifically, a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunStep&lt;/A&gt; is used, which requires a configuration object, &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunConfig&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunConfig&lt;/A&gt; has many parameters; below are the typical ones used for the Many Models Solution Accelerator, followed by a minimal configuration sketch. For a complete list of &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunConfig&lt;/A&gt; parameters, please see the &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;ParallelRunConfig Class&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;Parameter&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;Explanation&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;environment&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;Provides the configurations for the Python Environment&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;entry_script&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;This is the Python script (.py extension only) that runs in parallel on each mini-batch. Note that the Many Models Solution Accelerator provides one entry script that leverages AutoML and one that uses custom training code.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;compute_target&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;The AML ComputeCluster to run the step on.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;node_count&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;The number of nodes to use within the training cluster. Increase this number to increase parallelism.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;process_count_per_node&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;The number of worker processes to run on each node&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;mini_batch_size&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;For FileDatasets it’s the number of files processed per mini-batch; for Tabular Datasets it’s the approximate size of data processed per mini-batch.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216"&gt;
&lt;P&gt;run_invocation_timeout&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408"&gt;
&lt;P&gt;The maximum time, in seconds, allowed for each invocation of the entry script’s run() method&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
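&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To show how these parameters fit together, below is a minimal sketch of the training step. It is illustrative only: the entry script name, environment file, dataset, datastore, and compute target are placeholders carried over from the earlier sketches, and the accelerator’s notebooks remain the authoritative version. In older SDK versions, ParallelRunConfig and ParallelRunStep live under azureml.contrib.pipeline.steps rather than azureml.pipeline.steps.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.core import Environment, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

# Illustrative placeholders: ws, datastore, train_ds and compute_target come
# from the earlier setup sketches; train_env.yml and scripts/train.py are
# hypothetical names standing in for the accelerator's environment and
# entry script.
train_env = Environment.from_conda_specification("many_models_train", "train_env.yml")

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="train.py",              # runs once per mini-batch
    environment=train_env,
    compute_target=compute_target,
    node_count=4,                         # scale out for more parallelism
    process_count_per_node=8,             # worker processes per node
    mini_batch_size="1",                  # one file (one group) per mini-batch
    run_invocation_timeout=3600,          # seconds allowed per run() call
    error_threshold=-1,
    output_action="append_row",
)

train_output = PipelineData(name="training_output", datastore=datastore)

train_step = ParallelRunStep(
    name="many-models-training",
    parallel_run_config=parallel_run_config,
    inputs=[train_ds.as_named_input("train_files")],
    output=train_output,
    allow_reuse=False,
)

pipeline = Pipeline(workspace=ws, steps=[train_step])
run = Experiment(ws, "many-models-training").submit(pipeline)&lt;/LI-CODE&gt;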
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H4&gt;3. b. Training script with AutoML&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The solution accelerator showcases using &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast" target="_self"&gt;AutoML Forecasting&lt;/A&gt;. AutoML has many parameters; below are the typical ones used for the forecasting task within the Many Models Solution Accelerator. For a complete list of AutoMLConfig parameters, please see the &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py" target="_blank" rel="noopener"&gt;AutoMLConfig Class&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import logging

automl_settings = {
    "task" : 'forecasting',
    "primary_metric" : 'normalized_root_mean_squared_error',
    "iteration_timeout_minutes" : 10, 
    "iterations" : 15,
    "experiment_timeout_hours" : 1,
    "label_column_name" : 'Quantity',
    "n_cross_validations" : 3,
    "verbosity" : logging.INFO, 
    "debug_log": 'DebugFileName.txt',
    "time_column_name": 'WeekStarting',
    "max_horizon" : 6,
    "group_column_names": ['Store', 'Brand'],
    "grain_column_names": ['Store', 'Brand']
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;Parameter&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;Explanation&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="57px"&gt;
&lt;P&gt;task&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="57px"&gt;
&lt;P&gt;The type of AutoML task: 'classification', 'regression', or 'forecasting'&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;primary_metric&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;The metric that AutoML optimizes for when selecting the best model&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;iteration_timeout_minutes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;The maximum time, in minutes, that each iteration is allowed to run&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="57px"&gt;
&lt;P&gt;iterations&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="57px"&gt;
&lt;P&gt;The number of models to try (combinations of different algorithms and hyperparameters)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="84px"&gt;
&lt;P&gt;experiment_timeout_hours&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="84px"&gt;
&lt;P&gt;The maximum time, in hours, that the overall AutoML experiment can take. Note: the experiment may time out before all iterations are complete.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;label_column_name&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;The column that is being predicted&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="57px"&gt;
&lt;P&gt;n_cross_validations&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="57px"&gt;
&lt;P&gt;The number of cross-validation folds to perform on the training dataset&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;verbosity&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;The logging verbosity level (for example, logging.INFO)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;debug_log&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;Location for the debug log&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="57px"&gt;
&lt;P&gt;time_column_name&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="57px"&gt;
&lt;P&gt;The name of the time column. Note that the training dataset can contain multiple time series.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;max_horizon&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;How far into the future the forecast extends (the forecast horizon, in units of the time-series frequency)&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="57px"&gt;
&lt;P&gt;group_column_names&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="57px"&gt;
&lt;P&gt;The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="216px" height="30px"&gt;
&lt;P&gt;grain_column_names&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="408px" height="30px"&gt;
&lt;P&gt;The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
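&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For orientation only, the settings dictionary above maps onto an AutoMLConfig roughly as in the sketch below. This is not the accelerator’s exact wiring, which routes the settings through its AutoML entry script; training_data and compute_target are placeholders for one group’s dataset and the training cluster.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.train.automl import AutoMLConfig

# automl_settings is the dictionary shown above; the keys become keyword
# arguments of AutoMLConfig (task, primary_metric, label_column_name, the
# forecasting settings, and so on).
automl_config = AutoMLConfig(
    training_data=training_data,
    compute_target=compute_target,
    **automl_settings,
)&lt;/LI-CODE&gt;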
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;4. Model Forecasting&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The solution accelerator showcases model forecasting with a custom Python script and with AutoML, both orchestrated using a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py" target="_blank" rel="noopener"&gt;Pipeline&lt;/A&gt;. Please see &lt;A href="https://github.com/microsoft/solution-accelerator-many-models/tree/master/Custom_Script" target="_blank" rel="noopener"&gt;solution-accelerator-manymodels-customscript&lt;/A&gt; and &lt;A href="https://github.com/microsoft/solution-accelerator-many-models/tree/master/Automated_ML" target="_blank" rel="noopener"&gt;solution-accelerator-manymodels-AutoML&lt;/A&gt;. Putting it all together results in the architecture depicted in Figure 3, below.&lt;/P&gt;
&lt;DIV id="tinyMceEditorSam_Istephan_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorSam_Istephan_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Sam_Istephan_4-1595858410900.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/208189i5EEB941EF199227F/image-size/large?v=v2&amp;amp;px=999" role="button" title="Sam_Istephan_4-1595858410900.png" alt="Sam_Istephan_4-1595858410900.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;Figure 3: Solution Accelerator Model Scoring&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;5. Automation&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To automate the solution, the training and scoring pipelines must be published and a &lt;A href="https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineendpoint?view=azure-ml-py" target="_blank" rel="noopener"&gt;PipelineEndpoint&lt;/A&gt; must be created. Once that’s done, the PipelineEndpoint can be invoked from &lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/" target="_blank" rel="noopener"&gt;Azure Data Factory&lt;/A&gt;, specifically using the &lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/transform-data-machine-learning-service" target="_blank" rel="noopener"&gt;Azure Machine Learning Pipeline Activity&lt;/A&gt;. Note that the training and scoring pipelines can be collapsed into a single pipeline if training and scoring occur consecutively. A publishing sketch follows.&lt;/P&gt;
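&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch (the pipeline and endpoint names are illustrative, and the Data Factory activity itself is configured separately in ADF), publishing the training pipeline behind an endpoint might look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from azureml.pipeline.core import PipelineEndpoint

# 'pipeline' is the training (or combined training + scoring) Pipeline object.
published = pipeline.publish(
    name="many-models-training-pipeline",
    description="Trains one model per group in parallel",
)

# A PipelineEndpoint provides a stable REST endpoint that Azure Data Factory's
# Machine Learning pipeline activity can call; new pipeline versions can later
# be added behind the same endpoint.
endpoint = PipelineEndpoint.publish(
    workspace=ws,
    name="many-models-training-endpoint",
    pipeline=published,
    description="Stable endpoint for the many models training pipeline",
)
print(endpoint.endpoint)  # REST URL referenced from Azure Data Factory&lt;/LI-CODE&gt;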
&lt;H1&gt;&amp;nbsp;&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;EM&gt;Next Steps&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/" target="_self"&gt;Azure Machine Learning Documentation&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/solution-accelerator-many-models" target="_blank" rel="noopener"&gt;Many Models Solution Accelerator&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://channel9.msdn.com/Shows/Docs-AI/Building-Large-Scale-Machine-Learning-Models-using-Azure-Machine-Learning" target="_self"&gt;Many Models Solution Accelerator Video&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/data-factory/transform-data-machine-learning-service" target="_blank" rel="noopener"&gt;Azure Data Factory: Azure Machine Learning Pipeline Activity&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/mlonazure/AzureMachineLearning/tree/master/Pipeline%20Python%20and%20R" target="_blank" rel="noopener"&gt;MLOnAzure GitHub: Getting started with Pipelines&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mlonazure.com/gettingstartedwithaml/" target="_blank" rel="noopener"&gt;MLonAzure Blog: Getting Started with Azure Machine Learning for the Data Scientist&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mlonazure.com/pipelines" target="_blank" rel="noopener"&gt;MLonAzure Blog: Azure Machine Learning service Pipelines&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mlonazure.com/pipelines-parallelrunstep/" target="_self"&gt;MLonAzureBlog: Azure Machine Learning Pipeline - ParallelRunStep&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://youtu.be/wnB-EpjALIQ" target="_self"&gt;MLonAzureBlog: Video Walkthrough of ParallelRunStep&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/groups/12405375/" target="_self"&gt;LinkedIn Group: Machine Learning on Azure&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 05 Aug 2020 16:33:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/train-and-score-hundreds-of-thousands-of-models-in-parallel/ba-p/1547960</guid>
      <dc:creator>Sam_Istephan</dc:creator>
      <dc:date>2020-08-05T16:33:23Z</dc:date>
    </item>
    <item>
      <title>Re-ranking Cognitive Search results with Machine Learning for better search relevance</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/re-ranking-cognitive-search-results-with-machine-learning-for/ba-p/1542431</link>
      <description>&lt;P&gt;Are you looking for ways to fine-tune your model relevance? Sometimes developers create a customized ranking model to re-rank the results returned by Azure Cognitive Search. This allows them to use application-specific context as part of that model. To help facilitate this, Azure Cognitive Search is introducing a new query parameter called &lt;A title="featuresMode Parameter" href="https://docs.microsoft.com/en-us/rest/api/searchservice/preview-api/search-documents#featuresmodedisabled--enabled-optional-preview" target="_blank" rel="noopener"&gt;featuresMode&lt;/A&gt;. When this parameter is set, the response will contain information used to compute the search score of retrieved documents, which can be leveraged to train a re-ranking model using a Machine Learning approach.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have created a new &lt;A href="https://github.com/Azure-Samples/search-ranking-tutorial" target="_blank" rel="noopener"&gt;sample and tutorial&lt;/A&gt; that walks you through the learning to rank process end-to-end, with steps for designing, training, testing, and consuming a ranking model. The tutorial shows you how to extract features using the &lt;A title="featuresMode Parameter" href="https://docs.microsoft.com/en-us/rest/api/searchservice/preview-api/search-documents#featuresmodedisabled--enabled-optional-preview" target="_blank" rel="noopener"&gt;featuresMode&lt;/A&gt; parameter and train a ranking model to increase total search relevance as measured by the offline &lt;A href="https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG" target="_blank" rel="noopener"&gt;NDCG metric&lt;/A&gt;. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For customers who are less familiar with machine learning, a learn-to-rank method re-ranks top results based on a machine learning model. The re-ranking process can incorporate clickthrough data or domain expertise as a reflection of what is truly relevant to users. Below is a visualization of the components of the learn-to-rank method used in the tutorial.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorLuis Cabrera-Cordon_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="L2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/207548i60912CD917A7C5A7/image-size/large?v=v2&amp;amp;px=999" role="button" title="L2.png" alt="L2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="107"&gt;
&lt;P&gt;&lt;STRONG&gt;Legend&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="516"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107"&gt;
&lt;P&gt;Data&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="516"&gt;
&lt;P&gt;The articles and search statistics that reside in Azure Blob storage.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107"&gt;
&lt;P&gt;Search Index&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="516"&gt;
&lt;P&gt;Azure Cognitive Search ingests the data into a search index.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107"&gt;
&lt;P&gt;Re-ranker&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="516"&gt;
&lt;P&gt;Queries against the index produce scores and scoring features that are used to train a machine learning model based on labels derived from clickthrough data.&amp;nbsp; After the model is trained, you can use it to re-rank your documents.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107"&gt;
&lt;P&gt;Judgement labels&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="516"&gt;
&lt;P&gt;To train the machine learning model, you need labeled data that indicates which documents are most relevant for different queries. One way to do this is to collect clickthrough data to understand which documents are most popular. Another approach is to have human judges label the most relevant documents.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The &lt;A href="https://docs.microsoft.com/en-us/rest/api/searchservice/preview-api/search-documents#featuresmodedisabled--enabled-optional-preview" target="_blank" rel="noopener"&gt;featuresMode&lt;/A&gt; parameter is currently in preview and can be accessed through the Azure Cognitive Search REST APIs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Request&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=[api-version]&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&amp;nbsp; Content-Type: application/json&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&amp;nbsp; api-key: [admin or query key]&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Request Body&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;{&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; "search": ".net core",&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&amp;nbsp;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp; "featuresMode": "enabled",&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; "select": "title_en_us, description_en_us",&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; "searchFields": "body_en_us,description_en_us,title_en_us,apiNames,urlPath,searchTerms, keyPhrases_en_us",&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; "scoringStatistics": "global"&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Response&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;{&lt;/P&gt;
&lt;P&gt;&amp;nbsp; "value": [&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "@search.score": document_score (if a text query was provided),&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "@search.highlights": {&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; field_name: [ subset of text, ... ],&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "@search.features": {&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "field_name_1": {&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;"uniqueTokenMatches": 1.0,&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "similarityScore": 0.29541412,&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "termFrequency": 2&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "field_name_2": {&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "uniqueTokenMatches": 3.0,&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "similarityScore": 1.75345345,&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#008080"&gt;&lt;STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; "termFrequency": 6&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; },&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; },&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/P&gt;
&lt;P&gt;&amp;nbsp; ]&lt;BR /&gt;}&lt;/P&gt;
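&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For readers who prefer to issue the query from code, here is a hedged Python sketch using the requests library. The service name, index name, preview API version, and key are placeholders; the request mirrors the sample above.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import requests

# Placeholders for illustration only; substitute your own values.
service_name = "&lt;service-name&gt;"
index_name = "&lt;index-name&gt;"
api_version = "&lt;preview-api-version&gt;"
url = (
    f"https://{service_name}.search.windows.net/indexes/{index_name}"
    f"/docs/search?api-version={api_version}"
)
headers = {"Content-Type": "application/json", "api-key": "&lt;admin-or-query-key&gt;"}

body = {
    "search": ".net core",
    "featuresMode": "enabled",   # adds @search.features to each result
    "select": "title_en_us, description_en_us",
    "scoringStatistics": "global",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

for doc in response.json()["value"]:
    # Each searchable field exposes uniqueTokenMatches, similarityScore and
    # termFrequency, which can serve as features for a re-ranking model.
    print(doc["@search.score"], doc["@search.features"])&lt;/LI-CODE&gt;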
&lt;P&gt;If you are interested in this new capability, contact us at &lt;A href="mailto:azuresearchrelevance@microsoft.com" target="_blank" rel="noopener"&gt;azuresearchrelevance@microsoft.com&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://github.com/Azure-Samples/search-ranking-tutorial" target="_blank" rel="noopener"&gt;Search Ranking Tutorial Github&lt;/A&gt;&amp;nbsp; &lt;BR /&gt;&lt;A href="https://docs.microsoft.com/en-us/rest/api/searchservice/preview-api/search-documents#featuresmodedisabled--enabled-optional-preview" target="_blank" rel="noopener"&gt;FeaturesMode REST API Reference&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jul 2020 15:44:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/re-ranking-cognitive-search-results-with-machine-learning-for/ba-p/1542431</guid>
      <dc:creator>Luis Cabrera-Cordon</dc:creator>
      <dc:date>2020-07-29T15:44:08Z</dc:date>
    </item>
    <item>
      <title>Azure Machine Learning studio - a web interface for managing the machine learning lifecycle</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-machine-learning-studio-a-web-interface-for-managing-the/ba-p/1521780</link>
      <description>&lt;P&gt;Machine learning is a complex and task heavy art, be it cleaning data, creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning.&lt;/P&gt;
&lt;P&gt;As &lt;A href="https://azure.microsoft.com/en-us/blog/companies-of-all-sizes-tackle-real-business-problems-with-azure-ai/" target="_blank" rel="noopener"&gt;more companies embark on the journey of machine learning&lt;/A&gt; in everything they do, &lt;A href="https://azure.microsoft.com/services/machine-learning-studio/" target="_blank" rel="noopener"&gt;Microsoft Azure Machine Learning&lt;/A&gt;&amp;nbsp;provides them with enterprise-grade capabilities to accelerate the machine learning lifecycle and empowers developers and data scientists of all skill levels to build, train, deploy, and manage models responsibly and at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning studio&lt;/STRONG&gt; is the web user interface of &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;, enabling data scientists to complete their end-to-end machine learning lifecycle, from cleaning and labeling data, to training and deploying models using cloud scalable compute, in a single enterprise-ready tool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to announce that&amp;nbsp;&lt;STRONG&gt;Azure Machine Learning studio&lt;/STRONG&gt; &lt;STRONG&gt;is now generally available&lt;/STRONG&gt; worldwide, supporting 18 languages and over 30 locales!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning studio caters to all skill levels, with authoring tools such as the automated machine learning user interface to train and deploy models in a click of a button, and the drag and drop designer to create ML pipelines using a visual interface. All resources and assets created during the ML process – notebooks, models, pipelines, are all available for team collaboration under one roof.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this release, studio is even more comprehensive and easier to use:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Notebooks&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;:&amp;nbsp;Intellisense, checkpoints, tabs, editing without compute, updated file operations, improved kernel reliability, and many more.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Read more about Azure machine learning studio notebooks&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;here&lt;/SPAN&gt;&lt;/A&gt;.&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Notebooks.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205273i91A02948BEB844C7/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Notebooks.png" alt="Notebooks are integrated into Azure Machine Learning studio" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Notebooks are integrated into Azure Machine Learning studio&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-experiments#view-the-experiment-in-your-workspace-in-azure-machine-learning-studio" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Experimentation&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;: Compare multiple runs graphically using an improved charting visualization experience including chart smoothing, displaying aggregated data and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Run history.png" style="width: 984px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205275i0FB9260E15E4CFFC/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Run history.png" alt="Charts and metrics for tracking and analyzing runs" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Charts and metrics for tracking and analyzing runs&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;Security&lt;/STRONG&gt;: Granular&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Role Based Access Controls&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;(RBAC) are now supported (in preview) out of the box for the most common actions in your studio workspace. Specific actions or controls will now be hidden based on your role assignment automatically as setup by your IT Admins.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Compute&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;:&amp;nbsp;Compute instance has tons of improvements in quality, reliability, availability, provisioning latency, and user experience:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;New enterprise readiness and administrator capabilities:&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;REST API and CLI support to help automate creation and management of compute instance&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;ARM template support for provisioning compute instance with sample template documented and downloadable from UI&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Ability for admin to create compute instance on behalf of other users and assign to them through ARM template and REST API. Data scientists&amp;nbsp;do not&amp;nbsp;need to have create/delete RBAC permissions&amp;nbsp;and can&amp;nbsp;access&amp;nbsp;Jupyter,&amp;nbsp;JupyterLab, RStudio, use compute instance from integrated notebooks, and&amp;nbsp;can&amp;nbsp;start/stop/restart compute instances (this is in preview).&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Validating user subnet NSG rules&amp;nbsp;in virtual network for improved compute instance creation.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Encryption in transit using TLS 1.2&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Compute.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205278iF91D04D61CACEA97/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Compute.png" alt="More information available in the updated compute creation panel" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;More information available in the updated compute creation panel&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-designer" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Designer&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&lt;STRONG&gt;&amp;nbsp;(preview)&lt;/STRONG&gt;: Improved performance and reliability.&amp;nbsp;Updates to user experience and new features:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;New graph engine, with new-style modules. Modules have colored side bars to show the status and can be resized.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;New asset library, to split Datasets, Modules, Models into 3 tabs&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Output setting. Enable user to set module output datastores.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;New modules:&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Computer Vision: support image dataset preprocessing, train&amp;nbsp;PyTorch&amp;nbsp;models (ResNet/DenseNet), and score for image classification&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;Recommendation: support&amp;nbsp;Wide&amp;amp;Deep&amp;nbsp;recommender&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Designer.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205281i7F28FF9A6C2618D5/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Designer.png" alt="New style to Modules in the drag-and-drop Designer" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;New style to Modules in the drag-and-drop Designer&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-labeling-projects" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Data Labeling&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;:&amp;nbsp;Create, manage, and monitor labeling projects directly inside the studio web experience.&amp;nbsp;Coordinate data, labels, and team members to efficiently manage labeling tasks.&amp;nbsp;Supports&amp;nbsp;image classification, either multi-label or multi-class, and object identification with bounding&amp;nbsp;boxes. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The machine learning assisted labeling feature&amp;nbsp;(Preview)&amp;nbsp;lets you trigger automatic machine learning models to accelerate the labeling task.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Learn more about Azure Machine Learning data labeling in &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/accelerate-labeling-productivity-by-using-aml-data-labeling/ba-p/1479869" target="_blank" rel="noopener"&gt;this blog post&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Data labeling.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205283iB46516270662B102/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Data labeling.png" alt="Data labeling updated style and machine learning assisted labeling" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Data labeling updated style and machine learning assisted labeling&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-fairness-aml" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;&lt;STRONG&gt;Fairlearn&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;STRONG&gt;(preview)&lt;/STRONG&gt;:&amp;nbsp;Azure Machine Learning is used for managing the artifacts in your model training and deployment process. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With the new fairness capabilities, users can store and track their models’ fairness (disparity) insights in Azure Machine Learning studio and easily share their models’ fairness learnings among different stakeholders. Beyond logging fairness insights within Azure Machine Learning run history, users can load&amp;nbsp;Fairlearn’s&amp;nbsp;visualization dashboard in studio to interact with mitigated or original models’ predictions and fairness insights, select a suitable model, and register/deploy it for scoring.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Fairlearn.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205284i66D890DFB13B0075/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - Fairlearn.png" alt="Fairlearn visualization now available as preview in the studio" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Fairlearn visualization now available as preview in the studio&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Automated machine learning&lt;/STRONG&gt;&lt;/A&gt;&amp;nbsp;&lt;STRONG&gt;user interface (preview) &lt;/STRONG&gt;Automated machine learning is the process of automating the time-consuming, iterative tasks of machine learning model development to enable non data scientists to operationalize&lt;SPAN&gt; their machine learning models.&lt;/SPAN&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The new Data Guardrails helps fix and alert users of potential data issues. The model details tab includes key information around the best model and the run. There is more control over which visualizations are generated - choose a metric of interest and visualizations pertaining to that metric will display.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - AutoML.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205285iB09C1CC715845319/image-size/large?v=v2&amp;amp;px=999" role="button" title="Blog - AutoML.jpg" alt="Data guardrails in automated machine learning will alert for issues in the data and even fix some of them" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Data guardrails in automated machine learning will alert for issues in the data and even fix some of them&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Continuing the journey together&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Our customers inspire us to continue the journey, building together experiences that make machine learning easier to use, productive, and fun!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Send us your feedback&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Use the feedback panel to share your thoughts with us&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Blog - Feedback.JPG" style="width: 313px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/205538iA56BCEBD429F7441/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Blog - Feedback.JPG" alt="Feedback panel to share your thoughts" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Feedback panel to share your thoughts&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Log in to &lt;A href="https://ml.azure.com/" target="_blank" rel="noopener"&gt;Azure Machine Learning studio&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Learn more&lt;/A&gt; about Azure Machine Learning service.&lt;/LI&gt;
&lt;LI&gt;See the&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/updates/?category=ai-machine-learning" target="_blank" rel="noopener"&gt;latest product announcements&lt;/A&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Jul 2020 18:51:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-machine-learning-studio-a-web-interface-for-managing-the/ba-p/1521780</guid>
      <dc:creator>Tzvi Keisar</dc:creator>
      <dc:date>2020-07-15T18:51:52Z</dc:date>
    </item>
    <item>
      <title>Using VS Code to enhance your machine learning experience</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/using-vs-code-to-enhance-your-machine-learning-experience/ba-p/1506735</link>
      <description>&lt;DIV&gt;&lt;SPAN&gt;Hey&amp;nbsp;AML&amp;nbsp;community! The VS Code team is excited to present new capabilities we've added to the Azure Machine Learning (AML) extension. From version 0.6.12 onwards we've introduced UI changes and ways to help you manage Datastores, Datasets, and Compute instances all from directly within your favourite editor!&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;We're guessing many of&amp;nbsp;you&amp;nbsp;may&amp;nbsp;be&amp;nbsp;reading&amp;nbsp;about&amp;nbsp;this&amp;nbsp;extension&amp;nbsp;for&amp;nbsp;the&amp;nbsp;first&amp;nbsp;time&amp;nbsp;-&amp;nbsp;don't&amp;nbsp;worry,&amp;nbsp;we're&amp;nbsp;here&amp;nbsp;to&amp;nbsp;explain!&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;The&amp;nbsp;extension&amp;nbsp;is&amp;nbsp;a&amp;nbsp;companion&amp;nbsp;tool&amp;nbsp;to&amp;nbsp;the&amp;nbsp;AML&amp;nbsp;service.&amp;nbsp;It&amp;nbsp;provides&amp;nbsp;a&amp;nbsp;guided&amp;nbsp;experience&amp;nbsp;to&amp;nbsp;help&amp;nbsp;you&amp;nbsp;create&amp;nbsp;and&amp;nbsp;manage&amp;nbsp;your&amp;nbsp;AML&amp;nbsp;resources&amp;nbsp;from&amp;nbsp;directly&amp;nbsp;within&amp;nbsp;VS&amp;nbsp;Code.&amp;nbsp;The&amp;nbsp;extension&amp;nbsp;aims&amp;nbsp;to&amp;nbsp;streamline&amp;nbsp;tasks&amp;nbsp;such&amp;nbsp;as&amp;nbsp;running&amp;nbsp;experiments,&amp;nbsp;creating&amp;nbsp;compute&amp;nbsp;targets,&amp;nbsp;and&amp;nbsp;managing&amp;nbsp;environments,&amp;nbsp;without&amp;nbsp;requiring&amp;nbsp;the&amp;nbsp;context-switch&amp;nbsp;from&amp;nbsp;the&amp;nbsp;editor&amp;nbsp;to&amp;nbsp;the&amp;nbsp;browser.&amp;nbsp;With&amp;nbsp;an&amp;nbsp;easy-to-navigate&amp;nbsp;tree&amp;nbsp;view&amp;nbsp;you&amp;nbsp;can&amp;nbsp;work&amp;nbsp;across&amp;nbsp;all&amp;nbsp;your&amp;nbsp;workspaces&amp;nbsp;and&amp;nbsp;interact&amp;nbsp;with&amp;nbsp;your&amp;nbsp;core&amp;nbsp;AML&amp;nbsp;assets&amp;nbsp;using&amp;nbsp;single-click&amp;nbsp;commands.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If&amp;nbsp;you'd&amp;nbsp;like&amp;nbsp;to&amp;nbsp;learn&amp;nbsp;more&amp;nbsp;and&amp;nbsp;experiment&amp;nbsp;with&amp;nbsp;the&amp;nbsp;extension&amp;nbsp;you&amp;nbsp;can&amp;nbsp;install&amp;nbsp;it &lt;A title="AML extension install page" href="http://aka.ms/aml-ext" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;and&amp;nbsp;try&amp;nbsp;the&amp;nbsp;getting&amp;nbsp;started&amp;nbsp;docs &lt;A title="Extension tutorial page" href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-setup-vscode-extension" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN&gt;!&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Datastore Integration&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;One&amp;nbsp;of&amp;nbsp;the&amp;nbsp;new&amp;nbsp;features&amp;nbsp;we&amp;nbsp;released&amp;nbsp;is&amp;nbsp;the&amp;nbsp;support&amp;nbsp;for&amp;nbsp;Datastore&amp;nbsp;registration.&amp;nbsp;The&amp;nbsp;extension&amp;nbsp;currently&amp;nbsp;supports&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;Azure Blob Storage&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;and&amp;nbsp;&lt;STRONG&gt;Azure File Share&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;datastore&amp;nbsp;types.&amp;nbsp;We've&amp;nbsp;designed&amp;nbsp;a&amp;nbsp;set&amp;nbsp;of&amp;nbsp;streamlined&amp;nbsp;input&amp;nbsp;options&amp;nbsp;to&amp;nbsp;enable&amp;nbsp;faster&amp;nbsp;registrations, such as automatic retrieval of your Account Key credentials to authenticate against the storage account.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="register_datastore.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203902i8A34B1C099EC4D71/image-size/large?v=v2&amp;amp;px=999" role="button" title="register_datastore.png" alt="Register a Blob or File-based datastore in a highly streamlined manner" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Register a Blob or File-based datastore in a highly streamlined manner&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;STRONG&gt;Dataset Integration&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;The&amp;nbsp;extension&amp;nbsp;also&amp;nbsp;supports&amp;nbsp;creating&amp;nbsp;Tabular&amp;nbsp;and&amp;nbsp;File datasets&amp;nbsp;from&amp;nbsp;&lt;STRONG&gt;local files&lt;/STRONG&gt; or&amp;nbsp;&lt;STRONG&gt;web URLs&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="create_dataset.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203905i307D2FBB3F3ACC9D/image-size/large?v=v2&amp;amp;px=999" role="button" title="create_dataset.png" alt="Create a Tabular or File Dataset via the extension tree view" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Create a Tabular or File Dataset via the extension tree view&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Once you've created a Tabular dataset, you can use the extension to preview your data from directly within the editor. In the case of parquet data, the extension may require a profile run before previewing.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="preview_dataset.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203925i69FF9BD944073EDD/image-size/large?v=v2&amp;amp;px=999" role="button" title="preview_dataset.png" alt="Preview tabular dataset and filter rows." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Preview tabular dataset and filter rows.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Via&amp;nbsp;the&amp;nbsp;extension,&amp;nbsp;you&amp;nbsp;can&amp;nbsp;use&amp;nbsp;your&amp;nbsp;datasets&amp;nbsp;during&amp;nbsp;training&amp;nbsp;without&amp;nbsp;having&amp;nbsp;to&amp;nbsp;write&amp;nbsp;extra&amp;nbsp;AML&amp;nbsp;SDK&amp;nbsp;code.&amp;nbsp;Right&amp;nbsp;before&amp;nbsp;submitting,&amp;nbsp;you're&amp;nbsp;shown&amp;nbsp;a&amp;nbsp;partial&amp;nbsp;run&amp;nbsp;configuration&amp;nbsp;which&amp;nbsp;abstracts&amp;nbsp;the&amp;nbsp;complexities&amp;nbsp;of&amp;nbsp;referencing&amp;nbsp;your&amp;nbsp;datasets&amp;nbsp;through&amp;nbsp;an&amp;nbsp;estimator.&amp;nbsp;In&amp;nbsp;the&amp;nbsp;configuration,&amp;nbsp;you&amp;nbsp;just&amp;nbsp;need&amp;nbsp;to&amp;nbsp;input&amp;nbsp;the&amp;nbsp;script&amp;nbsp;parameter&amp;nbsp;and&amp;nbsp;attach&amp;nbsp;mechanism&amp;nbsp;you&amp;nbsp;want&amp;nbsp;to&amp;nbsp;use&amp;nbsp;for&amp;nbsp;File&amp;nbsp;datasets,&amp;nbsp;and&amp;nbsp;the&amp;nbsp;named&amp;nbsp;input&amp;nbsp;you'd&amp;nbsp;like&amp;nbsp;for&amp;nbsp;Tabular&amp;nbsp;datasets.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;LI-CODE lang="json"&gt;"datasets": {
    // file dataset input
    "mnist-ds": {
        "version": 1,
        "scriptParam": "--data-folder",
        "attachMechanism": "Mount"
    },
    // tabular dataset input
    "titanic-ds": {
        "version": 1,
        "namedInput": "titanic_ds"
    }
}&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Compute Instance Integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Creating&amp;nbsp;and&amp;nbsp;managing&amp;nbsp;compute&amp;nbsp;instances&amp;nbsp;has&amp;nbsp;never&amp;nbsp;been&amp;nbsp;easier!&amp;nbsp;You&amp;nbsp;can&amp;nbsp;view&amp;nbsp;all&amp;nbsp;your&amp;nbsp;workspace's&amp;nbsp;compute&amp;nbsp;instances&amp;nbsp;and&amp;nbsp;start/stop/restart&amp;nbsp;them&amp;nbsp;through&amp;nbsp;commands&amp;nbsp;in&amp;nbsp;the&amp;nbsp;tree.&amp;nbsp;With&amp;nbsp;a&amp;nbsp;small&amp;nbsp;number&amp;nbsp;of&amp;nbsp;clicks,&amp;nbsp;you&amp;nbsp;can&amp;nbsp;create&amp;nbsp;an&amp;nbsp;SSH-enabled&amp;nbsp;compute&amp;nbsp;instance&amp;nbsp;from&amp;nbsp;directly&amp;nbsp;within&amp;nbsp;VS&amp;nbsp;Code.&amp;nbsp;Upon&amp;nbsp;creating&amp;nbsp;an&amp;nbsp;SSH-enabled&amp;nbsp;compute&amp;nbsp;instance,&amp;nbsp;you&amp;nbsp;can&amp;nbsp;follow&amp;nbsp;our&amp;nbsp;in-editor&amp;nbsp;documentation&amp;nbsp;to&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;easily&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;connect&amp;nbsp;to&amp;nbsp;your&amp;nbsp;compute&amp;nbsp;via&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;A title="VS Code Remote SSH Extension Documentation" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;VS&amp;nbsp;Code&amp;nbsp;Remote&amp;nbsp;SSH&amp;nbsp;extension&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="manage_compute_instance.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203932i34F528FEC3535F3C/image-size/large?v=v2&amp;amp;px=999" role="button" title="manage_compute_instance.png" alt="Manage compute instances and connect to them via SSH" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Manage compute instances and connect to them via SSH&lt;/span&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;UI Changes&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;Something&amp;nbsp;we've&amp;nbsp;been&amp;nbsp;hearing&amp;nbsp;for&amp;nbsp;a&amp;nbsp;long&amp;nbsp;time&amp;nbsp;is&amp;nbsp;how&amp;nbsp;the&amp;nbsp;extension&amp;nbsp;UI&amp;nbsp;differs&amp;nbsp;from&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Azure ML Studio" href="https://ml.azure.com" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Azure&amp;nbsp;ML&amp;nbsp;Studio&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&amp;nbsp;In&amp;nbsp;the&amp;nbsp;previous&amp;nbsp;photos you&amp;nbsp;may&amp;nbsp;have&amp;nbsp;already&amp;nbsp;noticed&amp;nbsp;the&amp;nbsp;highly&amp;nbsp;consistent&amp;nbsp;design&amp;nbsp;in&amp;nbsp;the&amp;nbsp;extension&amp;nbsp;tree&amp;nbsp;view.&amp;nbsp;We've&amp;nbsp;updated&amp;nbsp;each&amp;nbsp;node&amp;nbsp;with&amp;nbsp;Studio-equivalent&amp;nbsp;icons&amp;nbsp;and&amp;nbsp;have&amp;nbsp;renamed/reordered&amp;nbsp;them&amp;nbsp;where&amp;nbsp;appropriate.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Feedback&lt;/STRONG&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;As&amp;nbsp;mentioned&amp;nbsp;throughout&amp;nbsp;the&amp;nbsp;blog&amp;nbsp;post,&amp;nbsp;many&amp;nbsp;of&amp;nbsp;the&amp;nbsp;newly&amp;nbsp;released&amp;nbsp;features&amp;nbsp;are&amp;nbsp;in&amp;nbsp;their&amp;nbsp;preliminary&amp;nbsp;phases&amp;nbsp;and&amp;nbsp;we're&amp;nbsp;actively&amp;nbsp;working&amp;nbsp;to&amp;nbsp;support&amp;nbsp;a&amp;nbsp;broader&amp;nbsp;set&amp;nbsp;of&amp;nbsp;scenarios&amp;nbsp;that&amp;nbsp;are&amp;nbsp;consistent&amp;nbsp;with&amp;nbsp;the&amp;nbsp;Azure&amp;nbsp;ML&amp;nbsp;Studio&amp;nbsp;and&amp;nbsp;SDK&amp;nbsp;experiences.&amp;nbsp;Here&amp;nbsp;are&amp;nbsp;some&amp;nbsp;of&amp;nbsp;the&amp;nbsp;scenarios&amp;nbsp;we're&amp;nbsp;actively&amp;nbsp;working&amp;nbsp;on:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Running your Notebooks in VS Code directly on an AML compute instance.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Building and working in Docker containers from an AML environment.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Creating datasets from an existing blob or file-based datastore.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Using AML environments when deploying an endpoint.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;If&amp;nbsp;there's&amp;nbsp;anything&amp;nbsp;that&amp;nbsp;you&amp;nbsp;would&amp;nbsp;like&amp;nbsp;us&amp;nbsp;to&amp;nbsp;prioritize,&amp;nbsp;please&amp;nbsp;feel&amp;nbsp;free&amp;nbsp;to&amp;nbsp;let&amp;nbsp;us&amp;nbsp;know&amp;nbsp;on&amp;nbsp;&lt;/SPAN&gt;&lt;A title="AML extension github page" href="https://github.com/microsoft/vscode-tools-for-ai/issues/new" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Github&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;!&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;BR /&gt;
&lt;DIV&gt;&lt;SPAN&gt;If&amp;nbsp;you're&amp;nbsp;an&amp;nbsp;existing&amp;nbsp;user&amp;nbsp;of&amp;nbsp;the&amp;nbsp;extension&amp;nbsp;and&amp;nbsp;would&amp;nbsp;like&amp;nbsp;to&amp;nbsp;provide&amp;nbsp;feedback,&amp;nbsp;please&amp;nbsp;feel&amp;nbsp;free&amp;nbsp;to&amp;nbsp;do&amp;nbsp;so&amp;nbsp;via&amp;nbsp;our&amp;nbsp;&lt;/SPAN&gt;&lt;A title="AML extension survey" href="http://aka.ms/aml-ext-survey" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;survey&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Wed, 08 Jul 2020 22:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/using-vs-code-to-enhance-your-machine-learning-experience/ba-p/1506735</guid>
      <dc:creator>Sid_Unnithan</dc:creator>
      <dc:date>2020-07-08T22:30:00Z</dc:date>
    </item>
    <item>
      <title>Accelerate extraction of text, data and structure from your documents with Form Recognizer</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-extraction-of-text-data-and-structure-from-your/ba-p/1507365</link>
      <description>&lt;P&gt;&lt;EM&gt;This blog has been authored by Neta Haiby (Principal PM, Azure AI) and Prachi Jain (PMM, Azure AI)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Documents are prevalent and often contain vital information that is essential to drive business outcomes; however, extracting data quickly and accurately for processing is a challenge for many organizations. Manual extraction can involve long processing cycles and cause errors and inefficiencies. Extracting text and structure from documents with Form Recognizer helps tackle these challenges and boost productivity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to announce the general availability (GA) release of Form Recognizer. You can now extract text, tables, and key value pairs quickly and accurately from documents. It supports multi-page documents (images, PDFs, and TIFF files) and extracts a structured representation of the document and its contents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO size="medium" vid="https://youtu.be/4nyosYeh68w" align="left" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/4nyosYeh68w/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Form Recognizer comprises the following:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1. Layout&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Detects and extracts text and table structure:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="layout.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204031iB4C51149F2CBD0E1/image-size/large?v=v2&amp;amp;px=999" role="button" title="layout.png" alt="layout.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How to use and get started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;You can use Form Recognizer Layout to recognize tables, text lines and words in documents, without needing to train a model. To get started you can use the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Extract the layout of a document, using the&amp;nbsp;&lt;STRONG&gt;StartRecognizeContentFromUri&lt;/STRONG&gt;&amp;nbsp;method in the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#recognize-form-content" target="_blank" rel="noopener"&gt;Form Recognizer client library&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Follow this QuickStart to extract the layout of a document &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-layout" target="_blank" rel="noopener"&gt;using the REST API&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;2. Pre-built&lt;/STRONG&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These are pre-trained models for common scenarios that extract values of interest from documents. The pre-built receipt model, which extracts data from receipts, is generally available today.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="prebuilt.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204033i57F20E7AB6231631/image-size/large?v=v2&amp;amp;px=999" role="button" title="prebuilt.png" alt="prebuilt.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How to use and get started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;You can use Form Recognizer to extract common fields from receipts, using a pre-trained receipt model. To get started you can use the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Extract data from receipts using the&lt;STRONG&gt; StartRecognizeReceiptsFromUri&lt;/STRONG&gt;&amp;nbsp;method in the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#recognize-form-content" target="_blank" rel="noopener"&gt;Form Recognizer client library&lt;/A&gt;.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Follow this QuickStart to extract data from receipts &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts" target="_blank" rel="noopener"&gt;using the REST API&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;3. Custom&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This capability lets you train models on your own data so they learn the structure of your documents, using either unsupervised or supervised learning.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="custom.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204005iC1735051CB8ECFFD/image-size/large?v=v2&amp;amp;px=999" role="button" title="custom.png" alt="custom.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How to use and get started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;You can train custom models tailored to your own documents. A trained model can output structured data that includes the text, tables, and key-value pair relationships in the original form document. After you train the model, you can test it and then use it to reliably extract data from more forms according to your needs. To get started you can use the following:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Train a custom model without labels and analyze your data using the custom form model methods in the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#recognize-form-content" target="_blank" rel="noopener"&gt;Form Recognizer client library&lt;/A&gt; or using the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;REST API&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;To train a custom model with labels and analyze your data, &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool" target="_blank" rel="noopener"&gt;follow this QuickStart&lt;/A&gt; and try out the Form Recognizer sample labeling tool located &lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Code examples&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;These code snippets show you how to do the following tasks with the Form Recognizer client library for .NET:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#authenticate-the-client" target="_blank" rel="noopener"&gt;Authenticate the client&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="authenticate the client.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204006i0DD7F768F7870896/image-size/large?v=v2&amp;amp;px=999" role="button" title="authenticate the client.png" alt="authenticate the client.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#recognize-form-content" target="_blank" rel="noopener"&gt;Extract text and tables from documents using Layout&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="extract tables.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204007i5980519137858B1E/image-size/large?v=v2&amp;amp;px=999" role="button" title="extract tables.png" alt="extract tables.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#recognize-receipts" target="_blank" rel="noopener"&gt;Recognize receipts&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="recognize reciepts.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204008i258EB14BB13FF26E/image-size/large?v=v2&amp;amp;px=999" role="button" title="recognize reciepts.png" alt="recognize reciepts.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#train-a-custom-model" target="_blank" rel="noopener"&gt;Train a custom model&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="train cutsom model 1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204010i524F5E8DFBF51BDB/image-size/large?v=v2&amp;amp;px=999" role="button" title="train cutsom model 1.png" alt="train cutsom model 1.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#analyze-forms-with-a-custom-model" target="_blank" rel="noopener"&gt;Analyze forms with a custom model&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="analyze forms.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204012iA994A8867F01C6E4/image-size/large?v=v2&amp;amp;px=999" role="button" title="analyze forms.png" alt="analyze forms.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=windows&amp;amp;pivots=programming-language-csharp#manage-your-custom-models" target="_blank" rel="noopener"&gt;Manage your custom models&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="manage your custom models.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204013iB67DCA12D9271D0E/image-size/large?v=v2&amp;amp;px=999" role="button" title="manage your custom models.png" alt="manage your custom models.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;How partners have built solutions with Form Recognizer&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="AA Logo.png" style="width: 151px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204041i3B9A171BAD0A54BB/image-size/small?v=v2&amp;amp;px=200" role="button" title="AA Logo.png" alt="AA Logo.png" /&gt;&lt;/span&gt;" Automation Anywhere has expanded the capabilities of IQ Bot to include Form Recognizer with an easy-to-use, “IQ Bot Forms” solution which combines the power of Microsoft Cognitive Services with IQ Bot and RPA to accelerate the end-to-end processing of complex documents. Use cases supported include Driver's Licenses, Insurance Claims and Tax Forms. This highly secure solution, comprises Automation Anywhere RPA with native Intelligent Document Processing (IDP) and Azure Cognitive Services Computer Vision API and Form Recognizer." &lt;FONT color="#3366ff"&gt;&lt;EM&gt;Shobhana Viswanathan, Director of Business Development, Automation Anywhere&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="blue prism logo.png" style="width: 151px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204042iDFC5BBD64CBA247C/image-size/small?v=v2&amp;amp;px=200" role="button" title="blue prism logo.png" alt="blue prism logo.png" /&gt;&lt;/span&gt;“Blue Prism has always been committed to creating a connected-RPA platform that makes it easy for our customers to consume the best in AI and machine learning technologies. As part of this commitment to innovation, we recently released a &lt;A href="https://digitalexchange.blueprism.com/dx/entry/3439/solution/form-recognizer-azure-cloud" target="_blank" rel="noopener"&gt;Form Recognizer API skill&lt;/A&gt; that gives our customers the power to quickly add deep-learning algorithms, advanced machine learning, and key value pair extraction to any Blue Prism process&lt;EM&gt;.” &lt;FONT color="#3366ff"&gt;Colin Redbond - Senior Vice President – Emerging Technologies at Blue Prism&lt;/FONT&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="icertis.png" style="width: 152px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204018i8572175BFD35CFF3/image-dimensions/152x111?v=v2" width="152" height="111" role="button" title="icertis.png" alt="icertis.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;“Icertis’ suite of AI technologies use machine learning to help understand the contract, its obligations and its environment better. Form Recognizer is an important tool in that arsenal that helps identify structured data in forms quickly and accurately. With a very simple training interface, it empowers the Icertis Contract Management platform users to effectively incorporate AI in their day-to-day processes while ensuring that their data is safe and protected – important steps in our vision of making contracting simple, yet powerful.” &lt;FONT color="#3366ff"&gt;&lt;EM&gt;Monish Darda, CTO and Co-founder, Icertis&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="neudesic logo.png" style="width: 151px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204043i45D83967A3E646DF/image-size/small?v=v2&amp;amp;px=200" role="button" title="neudesic logo.png" alt="neudesic logo.png" /&gt;&lt;/span&gt;“With the power of forms recognizer, Neudesic was able to create a simple interface for business users to extract data from multiple unique document sets, each containing complex data structures and dozens of data points, including tables. Users simply provide sample documents and label their data - no need to understand how Form Recognizer, or the other powerful Cognitive Services involved, should be applied, dramatically simplifying how AI can be applied to their processes.” &lt;FONT color="#3366ff"&gt;&lt;EM&gt;Ken Kuzdas, Artificial Intelligence and Process Automation Lead, Neudesic&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="uipath logo.png" style="width: 151px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204036iBACEE4B7C97F68BC/image-size/small?v=v2&amp;amp;px=200" role="button" title="uipath logo.png" alt="uipath logo.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;“UiPath remains committed to an open Platform and building integrations with partner AI services so you can automate document processing using your service of choice. The &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fconnect.uipath.com%2Fmarketplace%2Fcomponents%2Fmicrosoft-azure-form-recognizer-v2&amp;amp;data=02%7C01%7Cnetahw%40microsoft.com%7Ce83a996363474bf2a77c08d811f61525%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637279095909781376&amp;amp;sdata=5z10PTXOAwkt2oZuTmaAbE8ys%2FqxEL4QaR28Fosgkbg%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;UiPath Activity Pack for Microsoft Azure&lt;/STRONG&gt; &lt;STRONG&gt;Form Recognizer&lt;/STRONG&gt;&lt;/A&gt; makes it easy to automate tasks that involve document data – for example, reading invoices, timesheets, tables, and reports. By combining AI-powered document extraction services from Microsoft with the industry-leading UiPath Enterprise RPA Platform, you can extract data from any document using the service of your choice – and easily leverage this data in your automated processes.” &lt;FONT color="#3366ff"&gt;&lt;EM&gt;Brandon Brown, Director Integrations and Solutions Delivery, UiPath&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Independent benchmark testing results&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Cazton, a top leader in IT and software consulting, training, and recruiting services across the United States, Canada, and Europe, performed an independent study comparing available cloud offerings for recognizing form data and concluded that Azure Form Recognizer does a fantastic job of creating a viable solution with just five sample documents. It performs end-to-end Optical Character Recognition (OCR) on handwritten as well as digital documents with an impressive accuracy score, and in just three seconds.&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000ff"&gt;&lt;EM&gt;Chander Dhall, CEO of Cazton&lt;/EM&gt;&lt;/FONT&gt; quotes that “I am impressed with Microsoft's focus on creating artificial intelligence powered solutions that have practical uses in the enterprise.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;New in this release&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;In this release we are introducing the following new features:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;1. Enhanced security features&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#3366ff"&gt;Bring your own key&lt;BR /&gt;&lt;/FONT&gt;Form Recognizer automatically encrypts your data when persisted it to the cloud. Form Recognizer encryption protects your data and to help you to meet your organizational security&amp;nbsp; and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. However, you can now also manage your subscription with your own encryption keys.&amp;nbsp; Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility&amp;nbsp;to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. Learn more &lt;A style="font-family: inherit; background-color: #ffffff;" href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/form-recognizer-encryption-of-data-at-rest" target="_blank" rel="noopener"&gt;here&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#3366ff"&gt;Private endpoints&lt;BR /&gt;&lt;/FONT&gt;Enables you on a virtual network (VNet) to securely access data over a&amp;nbsp;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://docs.microsoft.com/en-us/azure/private-link/private-link-overview" target="_blank" rel="noopener"&gt;Private Link&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;2. Better Accuracy&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#3366ff"&gt;Table enhancements and Extraction enhancements&lt;BR /&gt;&lt;/FONT&gt;This feature includes extraction enhancements, accuracy improvements and table extractions&amp;nbsp; &amp;nbsp; enhancements, specifically, the capability&amp;nbsp;to learn tables headers and structures in custom train without labels.&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#3366ff"&gt;Currency support&lt;BR /&gt;&lt;/FONT&gt;Helps detection and extraction of global currency symbols&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;3. Extended Availability&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#3366ff"&gt;Azure Gov&lt;BR /&gt;&lt;/FONT&gt;&amp;nbsp;Form Recognizer is &lt;A style="font-family: inherit; background-color: #ffffff;" href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm" target="_blank" rel="noopener"&gt;available in 22 commercial regions&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt; and also in &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdevblogs.microsoft.com%2Fazuregov%2Fextract-data-from-print-and-handwritten-documents-using-azure-form-recognizer&amp;amp;data=02%7C01%7Cnetahw%40microsoft.com%7Ce096fac64cdb46129d1008d8220e8cfc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637296793192123359&amp;amp;sdata=4%2BdMSQVCShPLcjKAFyCu1U%2FRjPrZUiUtG3rEdsF5BiQ%3D&amp;amp;reserved=0" target="_self"&gt;Azure Gov&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Get started &lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;To get started, create a Form Recognizer resource in the &lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;Azure Portal&lt;/A&gt; and follow one of &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/form-recognizer-encryption-of-data-at-rest" target="_blank" rel="noopener"&gt;our quickstarts&lt;/A&gt; to extract data from your documents.&lt;/LI&gt;
&lt;LI&gt;To learn more about Form Recognizer and the rest of the Azure AI ecosystem, please visit our&amp;nbsp;&lt;A href="https://aka.ms/form-recognizer" target="_blank" rel="noopener"&gt;website&lt;/A&gt;&amp;nbsp;and read the&amp;nbsp;&lt;A href="https://aka.ms/form-recognizer/docs" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;For additional questions please reach out to us at&amp;nbsp;&lt;A href="mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener"&gt;formrecog_contact@microsoft.com&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jul 2020 16:20:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-extraction-of-text-data-and-structure-from-your/ba-p/1507365</guid>
      <dc:creator>pracjain</dc:creator>
      <dc:date>2020-07-09T16:20:11Z</dc:date>
    </item>
    <item>
      <title>Easily add voice commands to your apps with Custom Commands</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/easily-add-voice-commands-to-your-apps-with-custom-commands/ba-p/1503443</link>
      <description>&lt;P&gt;Voice-enabled assistants—that enable users to search, ask questions, complete tasks, and much more—have been gaining momentum and becoming integrated into consumers’ daily lives. Voice enables more seamless, natural interfaces, providing a more intuitive way of interacting with technology. In our current environment, voice and contactless experiences will play an increasingly important role, with the United States alone already seeing a 20 percent increase in preference for contactless operations (&lt;A href="https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/adapting-customer-experience-in-the-time-of-coronavirus" target="_blank" rel="noopener"&gt;McKinsey 2020&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We saw some of this vision at &lt;A href="https://youtu.be/eNhYTLWQFeg?t=2450" target="_blank" rel="noopener"&gt;Microsoft Build 2020 earlier this year where CTO Kevin Scott discussed emergent trends&lt;/A&gt; on the path to reshape software development, including the convergence of physical and digital worlds. One example centered around voice tech for food delivery, in which Boston Dynamics’ Spot robot completed curb-side deliveries using voice interaction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 70%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204028i792CA00BD5961A9E" border="0" title="ezgif.com-resize (4).gif" alt="ezgif.com-resize (4).gif" /&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are committed to enabling developers and designers to build innovative voice-enabled solutions. To help make it easier to build voice commanding applications, today, we’re excited to announce the general availability of &lt;A href="https://speech.microsoft.com/customcommands" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Custom Commands&lt;/STRONG&gt;&lt;/A&gt;.&amp;nbsp;Custom Commands&amp;nbsp;is a capability of &lt;A href="https://azure.microsoft.com/services/cognitive-services/speech-services/" target="_blank" rel="noopener"&gt;Speech&lt;/A&gt;&amp;nbsp;in Azure Cognitive Services that streamlines the process for creating task-oriented voice applications, providing a unified authoring experience with relatively low complexity,&amp;nbsp;helping you focus on building the best solution for your voice commanding scenarios.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Voice applications such as voice assistants listen to users and take an action in response. They involve transcribing the user's speech, taking action on the text using natural language processing, and using voice to respond with text-to-speech. Custom Commands brings together the best of Speech and Language in Azure Cognitive Services—&lt;A href="https://azure.microsoft.com/services/cognitive-services/speech-to-text/" target="_blank" rel="noopener"&gt;Speech to Text&lt;/A&gt; for speech recognition, &lt;A href="https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/" target="_blank" rel="noopener"&gt;Language Understanding&lt;/A&gt; for capturing spoken entities with speech adaptation, and voice response with &lt;A href="https://azure.microsoft.com/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Text to Speech&lt;/A&gt;, to accelerate the addition of voice capabilities to your apps iteratively and with low-code authoring experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Custom Commands is best suited for task completion or command-and-control scenarios that have a well-defined set of variables. In addition to the voice-activated delivery example with Boston Dynamics’ Spot, Custom Commands supports solutions in a variety of verticals including hospitality, automotive and retail. For example, you can build in-room voice-controlled experiences for your guests, enable in-vehicle communication and entertainment systems, or manage store inventory with an ambient smart speaker.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-270px"&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=FbkFc3zXhI8" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/FbkFc3zXhI8/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Building Voice Assistants&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Speech in Azure Cognitive Services provides solutions for building voice assistants that are tailored for your use case. Custom Commands streamlines the process for creating voice-enabled apps for simple task completion (with a fixed vocabulary and defined set of variables).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For some scenarios, you may also need a solution that handles more complex conversational interactions. For flexible voice assistants designed for open-ended conversational scenarios, &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/direct-line-speech" target="_blank" rel="noopener"&gt;Direct Line Speech&lt;/A&gt; &lt;SPAN&gt;enables you to build a robust solution that is optimized for &lt;/SPAN&gt;voice-in, voice-out interaction with bots.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here’s a sample reference architecture for an end-to-end voice assistant supported by Custom Commands:&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/202989i12B4EF81F1B1F258" border="0" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Customization and Extensibility&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Custom Commands, our goal is to simplify the process of creating a unique voice-first experience that reflects your brand. You can &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-commands-create-application-with-simple-commands" target="_blank" rel="noopener"&gt;configure multiple commands&lt;/A&gt; for commanding or task completion, &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-commands-add-parameters-to-commands" target="_blank" rel="noopener"&gt;add parameters&lt;/A&gt; and conditions to a particular task before completion, or &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-commands-add-interaction-rules" target="_blank" rel="noopener"&gt;configure interaction rules&lt;/A&gt; to handle confirmation prompts or one-step correction to help disambiguate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Publish your app and integrate it with any client app &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk" target="_blank" rel="noopener"&gt;using the Speech SDK&lt;/A&gt;. You can follow our &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-commands-setup-speech-sdk" target="_blank" rel="noopener"&gt;documentation to integrate using the Speech SDK for C#&lt;/A&gt;, or build your own using our Speech SDK, which is available in multiple languages on various platforms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It is very easy to integrate your app with the Speech SDK. Start by specifying the applicationId, subscriptionKey and region.&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="csharp"&gt;// Your application id
const string customCommandsApplicationId = "YourApplicationId"; 
// Your subscription key
const string customSubscriptionKey = "YourSpeechSubscriptionKey";
// The subscription service region.
const string region = "YourServiceRegion"; 

var customCommandsConfig = CustomCommandsConfig.FromSubscription(customCommandsApplicationId, customSubscriptionKey, region);
&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;P&gt;Then configure your client app to receive activity from the Custom Commands app.&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="csharp"&gt;// Implement event handlers
connector.ActivityReceived += (sender, activityReceivedEventArgs) =&amp;gt; { ... };&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;P&gt;Once you’ve published your app, consider &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/custom-keyword-overview" target="_blank" rel="noopener"&gt;adding a Custom Keyword&lt;/A&gt; to your app. Custom Keyword allows your product to be voice activated with a word or short phrase (for example, “Hey Cortana” is the keyword for the Cortana assistant).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Keywords generated using Custom Keyword can be &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk#add-custom-keyword-activation" target="_blank" rel="noopener"&gt;easily integrated with your device or application via the Speech SDK&lt;/A&gt;. Note that audio only starts streaming to the cloud (for verification that the user said the keyword) after the keyword has been detected locally on the user’s device.&lt;/P&gt;
&lt;DIV&gt;&lt;LI-CODE lang="csharp"&gt;// Start listening for keyword
var model = KeywordRecognitionModel.FromFile("YourKeywordModelFileName");
connector.StartKeywordRecognitionAsync(model);&lt;/LI-CODE&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, bring your app to life with natural-sounding voices using Text to Speech. You can either use one of our 100+ &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;out-of-the-box voices&lt;/A&gt;, or create a &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-voice#custom-neural-voices" target="_blank" rel="noopener"&gt;custom voice&lt;/A&gt; for your brand.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/202987i7180385A999FE3EE" border="0" title="Screen Shot 2020-07-02 at 4.54.53 PM.png" alt="Screen Shot 2020-07-02 at 4.54.53 PM.png" /&gt;&lt;/DIV&gt;
&lt;P&gt;In addition, we provide comprehensive support for your development workflow, including the ability to import/export your app and &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-custom-commands-deploy-cicd" target="_blank" rel="noopener"&gt;integrate with continuous deployment pipelines&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’re excited to see what you’ll build with Custom Commands.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started today&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="CustomCommands_microphone.gif" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/202988iBB0CFDB8ECE658D9/image-size/small?v=v2&amp;amp;px=200" role="button" title="CustomCommands_microphone.gif" alt="CustomCommands_microphone.gif" /&gt;&lt;/span&gt;&lt;SPAN style="font-family: inherit;"&gt;Get started today and check out our demos at &lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://speech.microsoft.com/customcommands" target="_blank" rel="noopener"&gt;https://speech.microsoft.com/customcommands&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more with our documentation: &lt;A href="http://aka.ms/speech/cc-docs" target="_blank" rel="noopener"&gt;Custom Commands documentation&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Follow the Quickstart: &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstart-custom-commands-application" target="_blank" rel="noopener"&gt;Create a voice assistant using Custom Commands&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Check out easy-to-deploy samples: &lt;A href="http://aka.ms/speech/cc-samples" target="_blank" rel="noopener"&gt;Voice Assistants GitHub repository&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;See the video tutorial: &lt;A href="https://www.youtube.com/watch?v=1zr0umHGFyc" target="_blank" rel="noopener"&gt;Building Voice Assistants using Custom Commands&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Jul 2020 18:19:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/easily-add-voice-commands-to-your-apps-with-custom-commands/ba-p/1503443</guid>
      <dc:creator>Vishesh Oberoi</dc:creator>
      <dc:date>2020-07-08T18:19:08Z</dc:date>
    </item>
    <item>
      <title>Introducing Text Analytics for Health</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-text-analytics-for-health/ba-p/1505152</link>
      <description>&lt;P&gt;&lt;FONT size="4"&gt;&lt;STRONG&gt;Hadas Bitran, Group Manager, Microsoft Healthcare&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The healthcare industry is overwhelmed with data. Much of this healthcare data is in the form of unstructured text, such as doctor’s notes, medical publications, electronic health records, clinical trials protocols, medical encounter transcripts and more. Healthcare organizations, providers, researchers, pharmaceutical companies, and others face an incredible challenge in trying to identify and draw insights from all that information. Unlocking insights from this data has massive potential for improving healthcare services and patient outcomes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are excited to introduce &lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health?tabs=ner" target="_blank" rel="noopener"&gt;Text Analytics for health&lt;/A&gt;, a new preview feature of Text Analytics in Azure Cognitive Services that enables developers to process and extract insights from unstructured medical data. Trained on a diverse range of medical data—covering various formats of clinical notes, clinical trials protocols, and more—the health feature is capable of processing a broad range of data types and tasks, without the need for time-intensive, manual development of custom models to extract insights from the data.&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203807i113493AE6DF87BE6/" border="0" title="Text Analytics for Health 3D illustration_07062020.jpg" alt="Text Analytics for Health 3D illustration_07062020.jpg" /&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Uncover deep insights and relationships in medical data&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Text Analytics for health, users can detect words and phrases mentioned in unstructured text as entities that can be associated with semantic types in the healthcare and biomedical domain, such as diagnosis, medication name, symptom/sign, examinations, treatments, dosage, and route of administration. (For full list of health entity types and relationships, see the &lt;A href="https://docs.microsoft.com/azure/cognitive-services/text-analytics/named-entity-types?tabs=health" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.) In addition, users can extract more than 100 types of personally identifiable information (PII), including&amp;nbsp;protected health information (PHI), in unstructured text.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203991i407C393C2FB13A73" border="0" title="TA for health image 1.png" alt="TA for health image 1.png" /&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Text Analytics also links entities to medical ontologies and domain-specific coding systems (for example, the &lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html" target="_blank" rel="noopener"&gt;Unified Medical Language System&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt;), and identifies meaningful connections between concepts mentioned in text (for example, finding the relationship between a medication name and the dosage associated with it).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203993i657E3D4C80349308" border="0" title="TA for health image 2.png" alt="TA for health image 2.png" /&gt;&lt;/DIV&gt;
&lt;P&gt;The meaning of medical content is also highly affected by modifiers, such as negation, which can have critical implications if missed. For example, it is important for healthcare professionals to determine when a patient “has not been diagnosed with something” or “does not experience a certain symptom.” The health feature supports negation detection for the different entities mentioned in text.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Speed time to healthcare insights&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Text Analytics for health enables researchers, data analysts, medical professionals and ISVs in the healthcare and biomedical space to unlock a wide range of scenarios—like producing analytics on historical medical data and creating prediction models, matching patients to clinical trials, or assisting in clinical quality reviews.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In response to the COVID-19 pandemic, Microsoft partnered with the &lt;A href="https://allenai.org/" target="_blank" rel="noopener"&gt;Allen Institute for AI&lt;/A&gt;&amp;nbsp;and leading research groups to prepare the &lt;A href="https://azure.microsoft.com/services/open-datasets/catalog/covid-19-open-research/" target="_blank" rel="noopener"&gt;COVID-19 Open Research Dataset&lt;/A&gt;&lt;SPAN&gt;,&lt;/SPAN&gt; a free resource of over 47,000 scholarly articles for use by the global research community. With Cognitive Search and Text Analytics, we developed the &lt;A href="https://covid19search.azurewebsites.net/" target="_blank" rel="noopener"&gt;COVID-19 search engine&lt;/A&gt;&lt;SPAN&gt;,&lt;/SPAN&gt; which enables researchers to more quickly evaluate and gain insights from the overwhelming amount of information about COVID-19.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=rBbk7ONsKF4&amp;amp;feature=emb_title" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/rBbk7ONsKF4/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;Learn more about using AI to mine unstructured research papers to fight COVID-19.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are working closely with organizations such as University College London (UCL), which is conducting reviews of medical research reports.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;“One of our focuses as a research group is undertaking systematic reviews across a range of policy areas,” says Professor James Thomas at UCL, and Director of the &lt;A title="https://iris.ucl.ac.uk/iris/browse/researchgroup/1648" href="https://iris.ucl.ac.uk/iris/browse/researchGroup/1648" target="_blank" rel="noopener"&gt;EPPI-Centre’s Reviews Facility&lt;/A&gt; for the Department of Health, England. “We have been partnering with engineers at Microsoft and data scientists to build a ‘living’ reviews system – that automatically identifies relevant research for reviews as they are published. Text Analytics for health provides a powerful tool for extracting insights from clinical literature, with rich&amp;nbsp;support for a wide range of healthcare terminology so that we can more quickly and accurately identify relevant information.”&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;At Microsoft, our goal within healthcare is to empower peo&lt;/FONT&gt;ple and organizations to address the complex challenges facing the healthcare industry today, working closely with our customers and partners to bring healthcare solutions to life. We’re excited to make Text Analytics for health available in support of this mission.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Get started with Text Analytics for health&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Text Analytics for health is currently available in containers. With containers, you can deploy resources in your own development environment that meets your specific security and data governance requirements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The container provides REST-based query prediction endpoint APIs. Below is an example API request and response body:&lt;/P&gt;
&lt;DIV&gt;&lt;IMG style="margin: auto; width: 80%;" src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203804i32CD2F3023BC0812" border="0" title="TA for health container GIF.gif" alt="TA for health container GIF.gif" /&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;For more resources:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Check out the Text Analytics &lt;A href="https://aka.ms/TAforHealth-Docs" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See Text Analytics in action with the &lt;A href="https://covid19search.azurewebsites.net/" target="_blank" rel="noopener"&gt;Covid-19 search engine demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/TAforHealth-Gating" target="_blank" rel="noopener"&gt;Contact us&lt;/A&gt; to try Text Analytics for health&lt;/LI&gt;
&lt;LI&gt;To learn more about healthcare solutions using Azure, visit &lt;A href="https://azure.microsoft.com/industries/healthcare/" target="_blank" rel="noopener"&gt;our website&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 08 Jul 2020 18:04:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-text-analytics-for-health/ba-p/1505152</guid>
      <dc:creator>hadasb</dc:creator>
      <dc:date>2020-07-08T18:04:11Z</dc:date>
    </item>
    <item>
      <title>Neural Text to Speech extends support to 15 more languages with state-of-the-art AI quality</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911</link>
      <description>&lt;H1&gt;Neural Text to Speech extends support to 15 more languages with state-of-the-art AI quality&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;EM&gt;This post was co-authored by Sheng Zhao, Jie Ding, Anny Dow, Garfield He and Lei He. &amp;nbsp;&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&lt;SPAN&gt;,&lt;/SPAN&gt;&amp;nbsp;part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural interfaces. Neural Text to Speech (Neural TTS) enables a wide range of scenarios, from audio content creation to natural-sounding voice assistants. Companies like the &lt;A href="https://customers.microsoft.com/en-us/story/754836-bbc-media-entertainment-azure" target="_blank" rel="noopener"&gt;BBC&lt;/A&gt; and &lt;A href="https://aka.ms/MotorolaSolutions" target="_blank" rel="noopener"&gt;Motorola Solutions&lt;/A&gt; are using Text to Speech in Azure to develop conversational interfaces for their voice assistants.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To make it possible for more developers to add natural-sounding voices to their applications and solutions, today, we’re building on our language support with 15 new Neural TTS voices along with significant voice quality improvements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Language support extended with 15 new voices&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Our new Neural TTS voices include: Salma in Arabic (Egypt), Zariyah in Arabic (Saudi Arabia), Alba in Catalan (Spain), Christel in Danish (Denmark), Neerja in English (India), Noora in Finnish (Finland), Swara in Hindi (India), Colette in Dutch (Netherlands), Zofia in Polish (Poland), Fernanda in Portuguese (Portugal), Dariya in Russian (Russia), Hillevi in Swedish (Sweden), Achara in Thai (Thailand), HiuGaai in Chinese (Cantonese, Traditional) and HsiaoYu in Chinese (Taiwanese Mandarin).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear samples of the voices, or try them with your own text in &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;our demo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="height: 1000px; width: 900px;" width="900"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="57px" scope="col" style="width: 120px; height: 57px; vertical-align: middle;"&gt;
&lt;P&gt;&lt;STRONG&gt;Locale code&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="57px" scope="col" style="width: 200px; height: 30px; vertical-align: middle;"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="57px" scope="col" style="width: 200px; height: 30px; vertical-align: middle;"&gt;
&lt;P&gt;&lt;STRONG&gt;Voice name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="57px" scope="col" style="width: 500px; height: 30px; vertical-align: middle;"&gt;
&lt;P&gt;&lt;STRONG&gt;Audio sample&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;ar-EG&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="357px" scope="row" style="width: 100px; height: 150px;"&gt;
&lt;P&gt;Arabic (Egypt)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“ar-EG-SalmaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;أتساءل ماذا يمكن ان يحدث لجسمك عندما تأكل الزنجبيل كل يوم لمدة شهر؟&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ar-EG.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;ar-SA&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Arabic (Saudi Arabia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“ar-SA-ZariyahNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;لديك نصف ساعة فقط؟&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ar-SA 4.63.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;ca-ES&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Catalan (Spain)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“ca-ES-AlbaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;L'obra és el retrat d'un moment històric de mobilització popular.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ca-ES.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;da-DK&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Danish (Denmark)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“da-DK-ChristelNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Halvfjerds procent af din krop består af vand&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/da-DK 4.73.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;en-IN&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;English (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“en-IN-NeerjaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;How about coming to the barbecue at the tennis club?&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/en-IN 4.35.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;fi-FI&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Finnish (Finland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“fi-FI-NooraNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Tavoitteena on lisätä kohtuuhintaisten vuokra-asuntojen määrää kasvukeskuksissa.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/fi-FI 4.7.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;hi-IN&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Hindi (India)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“hi-IN-SwaraNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;‘आयरन’ शब्द किस खेल से सम्बन्धित है ?&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/hi-IN 4.63.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;nl-NL&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Dutch (Netherlands)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“nl-NL-ColetteNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Alle oceanen zijn met elkaar verbonden en vormen samen één grote massa zout water.&amp;nbsp;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/nl-NL 4.58.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;pl-PL&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Polish (Poland)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“pl-PL-ZofiaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Wyjazd z Poznania planujemy o godzinie czwartej rano.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/pl-PL 4.55.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;pt-PT&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Portuguese (Portugal)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“pt-PT-FernandaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Amanhã vai estar tanto calor que vou à praia.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/pt-PT 4.89.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;ru-RU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Russian (Russia)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“ru-RU-DariyaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;В качестве примера он привел искусственный интеллект, беспилотную технику, генетику, медицину и образование.&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ruRU - 4.76.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;sv-SE&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Swedish (Sweden)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“sv-SE-HilleviNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;Ett kul och intressant avsnitt även för dig som inte var på plats!&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/sv-SE 4.83.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;th-TH&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Thai (Thailand)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“th-TH-AcharaNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;เขาทำด้วยหัวใจบริสุทธิ์และต้องการให้ความยุติธรรมแก่ประชาชน&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/th-TH 4.2.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;zh-HK&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Chinese (Cantonese, Traditional)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“zh-HK-HiuGaaiNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;了解該等基金的三大特點,有助投資者作出更明智的選擇。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/zh-HK 4.39.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="96.8182px" height="150px"&gt;
&lt;P&gt;&lt;STRONG&gt;zh-TW&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100px" height="150px"&gt;
&lt;P&gt;Chinese (Taiwanese Mandarin)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="129.545px" height="150px"&gt;
&lt;P&gt;“zh-TW-HsiaoYuNeural”&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="472.727px" height="150px"&gt;
&lt;P&gt;如果一個人從一所優秀大學畢業，可能意味他有能力做大事。&lt;/P&gt;
&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/zh-TW 4.29.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Text-to-speech quality is measured by Mean Opinion Score (MOS), a widely recognized scoring method for speech quality evaluation. For MOS studies, participants rate speech characteristics such as sound quality, pronunciation, speaking rate, and articulation on a 5-point scale. According to several MOS tests we have done (n&amp;gt;50 for each study), the average MOS score for the 15 new Neural TTS voices is above &lt;STRONG&gt;4.1&lt;/STRONG&gt;, about &lt;STRONG&gt;+0.5&lt;/STRONG&gt; higher than the scores for standard (non-neural) voices.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;full language list&lt;/A&gt; for Neural TTS and standard voices. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Voice quality &lt;/STRONG&gt;&lt;STRONG&gt;and performance improved with state-of-the-art neural speech synthesis models&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS initially achieved near-human parity on sentence reading using a recurrent neural network (RNN) based sequence-to-sequence model. Inspired by the Transformer model, a powerful sequence-to-sequence modeling architecture that advanced the state of the art in neural machine translation (NMT), Microsoft researchers piloted the &lt;A href="https://arxiv.org/abs/1809.08895" target="_blank" rel="noopener"&gt;Transformer&lt;/A&gt; and &lt;A href="https://arxiv.org/abs/1905.09263" target="_blank" rel="noopener"&gt;FastSpeech&lt;/A&gt;&amp;nbsp;models on Neural TTS and saw significant improvements in performance and efficiency. The Transformer TTS model is based on the auto-regressive Transformer structure, which can produce speech output in quality close to that of actual human voices with 5x less training time. &lt;A href="https://www.microsoft.com/en-us/research/blog/fastspeech-new-text-to-speech-model-improves-on-speed-accuracy-and-controllability/" target="_blank" rel="noopener"&gt;FastSpeech&lt;/A&gt; is a new text-to-speech model that improves speech synthesis speed, accuracy, and controllability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="base model.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204045i0C3460D6831693E2/image-size/large?v=v2&amp;amp;px=999" role="button" title="base model.png" alt="New neural voice model creation based on teacher-student transfer learning" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;New neural voice model creation based on teacher-student transfer learning&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Multi-lingual and multi-speaker TTS recordings are first used to train a transformer base model.&amp;nbsp;To scale TTS development for many locales and voices, it is vital to have a highly agile development process. We built a "transformer teacher model" with 3,000+ hours of speech data from hundreds of speakers&amp;nbsp;in 50+ languages/locales, about 50x the data of a typical single-language multi-speaker model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By using around 2 hours of a target speaker’s data, we can now adapt the multi-lingual multi-speaker transformer teacher model to generate a new high-quality model for that speaker that sounds very similar to the original recording.&amp;nbsp;Then we can use this “finetuned teacher” to generate training data with rich content coverage to train a FastSpeech “student” for deployment that achieves the same quality as its finetuned teacher.&lt;/P&gt;
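&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Purely as an illustration of this teacher-student idea (and not the actual Neural TTS training code), the following minimal PyTorch sketch freezes a "teacher" network, uses its outputs as training targets, and fits a smaller "student" to reproduce them; the feature dimensions and random inputs are made-up stand-ins:&lt;/P&gt;
&lt;PRE&gt;
# Conceptual teacher-student (knowledge distillation) sketch; sizes and data are placeholders.
import torch
import torch.nn as nn

FEATURE_DIM, MEL_DIM = 256, 80   # hypothetical text-feature and mel-spectrogram dimensions

teacher = nn.Sequential(nn.Linear(FEATURE_DIM, 512), nn.ReLU(), nn.Linear(512, MEL_DIM))
student = nn.Sequential(nn.Linear(FEATURE_DIM, 128), nn.ReLU(), nn.Linear(128, MEL_DIM))
teacher.eval()                   # the finetuned teacher is frozen

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    text_features = torch.randn(32, FEATURE_DIM)    # stand-in for phoneme/prosody features
    with torch.no_grad():
        target_mels = teacher(text_features)         # teacher-generated training targets
    loss = loss_fn(student(text_features), target_mels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
&lt;/PRE&gt;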
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With this powerful multi-lingual model, we are also able to take the voice samples from one speaker in one language as input and transfer the voice into another target language, without losing quality.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_15" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;With the Transformer and FastSpeech models, key improvements include:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Quality enhancements&lt;/STRONG&gt;: The new models achieved significant MOS improvements over the previous robust LSTM-based Neural TTS models in our platform. For example, we did a side-by-side comparison on the de-DE Katja voice; the new model shows a +0.4 comparative MOS gain over the baseline.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Higher performance&lt;/STRONG&gt;:&amp;nbsp;With the new models, users can get high quality Neural TTS output with faster response time. FastSpeech “students” have &lt;STRONG&gt;10X&lt;/STRONG&gt; inference speedup on mel-spectrogram generation using M60 GPUs compared to our previous production systems. Neural TTS can run 40% faster on a Kubernetes GPU Pod. We can also run Neural TTS on CPU with 0.06 RTF (Real Time Factor), which means 1 second of audio can be generated in 60ms on a Kubernetes CPU Pod.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Language-specific improvements&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When developing Neural TTS for new languages, there are also language-specific challenges that need to be addressed to ensure high voice quality and performance.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, to make synthetic speech sound humanlike, it is critical to get pitch accents right. Japanese (ja-JP) poses challenges for speech synthesis because of its complicated pitch accents. However, most end-to-end TTS systems cannot perform well on pitch accents; we found that about 60% of the production system's problems in Japanese synthesis are related to intonation and accents.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_16" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Accent model.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/204047i1ED84F1BA412F0B1/image-size/large?v=v2&amp;amp;px=999" role="button" title="Accent model.png" alt="Language-specific pitch accent prediction model" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Language-specific pitch accent prediction model&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We built a transformer model to predict and account for pitch accent related features. The accent model predicts accent phrase boundaries and accent type information, and these accent features are introduced into the acoustic model. The teacher model and student model will use the accent features in training and synthesis.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the pitch accent features, the voice quality improves significantly. Our MOS test shows that the new ja-JP voice, Nanami, has a &lt;STRONG&gt;+0.3&lt;/STRONG&gt; improvement in MOS score compared to the previous production system.&amp;nbsp;This method is also applicable to other languages with pitch accents.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here are some samples:&lt;/P&gt;
&lt;TABLE style="width: 900px;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="300px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="217.727px" scope="col" style="width: 300px;"&gt;
&lt;P&gt;Sample of the old model without pitch accent support&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="381.364px" scope="col" style="width: 300px;"&gt;
&lt;P&gt;Sample of the new model with pitch accent support&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="300px"&gt;
&lt;P&gt;1日2食に切り替える予定だ。&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="217.727px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00001_old.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="381.364px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00001_new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="300px"&gt;
&lt;P&gt;被災地には僕らの番組のため今も毎週のように行っています。&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="217.727px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00002_old.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="381.364px"&gt;&lt;AUDIO controls="controls" data-mce-fragment="1"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/00002_new.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create a custom voice with Neural TTS technology &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The latest technical advancements in Neural TTS are also available in the&amp;nbsp;&lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt;&amp;nbsp;capability, enabling organizations to create a unique brand voice in multiple languages with 5-10X less data.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR3f_-mitwQlFp-aY9u7mCfFUQjJSQ09NMkY1QVRDTU4yNjRUVzBEREVGVCQlQCN0PWcu" target="_blank" rel="noopener"&gt;Learn more about the process for getting started with Custom Neural Voice&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers. Text to Speech on Azure has more than&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;70 standard voices in over 40 languages&lt;/A&gt;&amp;nbsp;and locales in addition to our growing list of&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;Neural TTS voices&lt;/A&gt;.&lt;/P&gt;
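&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a minimal sketch of how one of the new neural voices can be used from the Speech SDK for Python (the subscription key, region, and output file name below are placeholders), you can set the synthesis voice on the speech configuration and synthesize a sentence to a WAV file:&lt;/P&gt;
&lt;PRE&gt;
# Minimal Speech SDK sketch: synthesize one of the new neural voices to a WAV file.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "pl-PL-ZofiaNeural"   # any neural voice from the table above

audio_config = speechsdk.audio.AudioOutputConfig(filename="sample.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

result = synthesizer.speak_text_async("Wyjazd z Poznania planujemy o godzinie czwartej rano.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio written to sample.wav")
&lt;/PRE&gt;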
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For more information:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Try the TTS&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#features" target="_blank" rel="noopener"&gt;demo&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;See our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Check out our &lt;A href="https://github.com/Azure-Samples/cognitive-services-speech-sdk" target="_blank" rel="noopener"&gt;sample code&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 08 Jul 2020 14:19:53 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-07-08T14:19:53Z</dc:date>
    </item>
    <item>
      <title>Accelerate labeling productivity by using AML Data Labeling</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-labeling-productivity-by-using-aml-data-labeling/ba-p/1479869</link>
      <description>&lt;P&gt;Labeled data is critical to training supervised learning models. Higher volumes and more accurate labeled data contribute to more accurate models but labeling data has traditionally been time-intensive and error-prone.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With Data Labeling in Azure Machine Learning, you now have a central place to create, manage, and monitor labeling projects. You&lt;/SPAN&gt; can manage data labeling projects seamlessly from within the studio web experience to generate and manage tasks, reducing the back-and-forth of labeling data offline.&amp;nbsp;With AML Data Labeling, you can load and label data and be ready to train in minutes.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;To increase productivity and decrease costs for a given task, the Assisted Machine Learning labeling&amp;nbsp;feature allows you to leverage automatic machine learning models to accelerate labeling by clustering similar images and automatically prelabeling data when the underlying model has reached high confidence. This feature is available for image classification (multi-class or multi-label) and object detection tasks, in Enterprise edition workspaces.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1593759641937.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203048i57BA15361E9BE75E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Vijai_Kannan_0-1593759641937.png" alt="Vijai_Kannan_0-1593759641937.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Data Labeling in Azure Machine Learning now includes the following capabilities:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Image Classification Multi-Class&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This project type helps you categorize images when you want to apply only a&amp;nbsp;single class&amp;nbsp;from a set of classes to each image.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1593755650807.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203045i6F1287D300AF4EEF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Vijai_Kannan_0-1593755650807.png" alt="Vijai_Kannan_0-1593755650807.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Image Classification Multi-label&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This project type allows you to categorize images when you want to apply one or more&amp;nbsp;labels from a set of classes to each image. For instance, a photo of a dog might be labeled with both&amp;nbsp;dog&amp;nbsp;and&amp;nbsp;land.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1593757313332.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/203047iCB4A9427B18B195F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Vijai_Kannan_0-1593757313332.png" alt="Vijai_Kannan_0-1593757313332.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Object Identification (Bounding Box)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Use this project type when you want to assign a class and a bounding box to each object within an image.&amp;nbsp;If your project is of type "Object Identification (Bounding Boxes)," you'll specify one or more bounding boxes in the image and apply a tag to each box. Images can have multiple bounding boxes, each with a single tag.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_3-1593099333352.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/201097i80DB513BE6F32AA0/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Vijai_Kannan_3-1593099333352.png" alt="Vijai_Kannan_3-1593099333352.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Assisted machine learning&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The&amp;nbsp;&lt;STRONG&gt;machine assisted labeling&lt;/STRONG&gt;&amp;nbsp;feature lets you trigger automatic machine learning models to accelerate the labeling task. At the beginning of your labeling project, the images are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your images are of a single class, then approximately 80% of the data used to train the model will be of that class. This training does not include active learning.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Enabling ML assisted labeling&lt;/EM&gt;&amp;nbsp;consists of two phases:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Clustering&lt;/LI&gt;
&lt;LI&gt;Prelabeling&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;There is no fixed number of labeled images necessary to start assisted labeling; it can vary significantly from one labeling project to another. ML assisted labeling uses a technique called&amp;nbsp;&lt;EM&gt;transfer learning&lt;/EM&gt;, and prelabeling is triggered when sufficient confidence is achieved, which varies based on the dataset.&lt;/P&gt;
&lt;P&gt;Since the final labels still rely on input from the labeler, this technology is sometimes called&amp;nbsp;&lt;EM&gt;human in the loop&lt;/EM&gt;&amp;nbsp;labeling.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;STRONG&gt;Clustering&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;After a certain number of labels are submitted manually, the machine learning model for image classification starts to group together similar images. These similar images are presented to the labelers on the same screen to speed up manual tagging. Clustering is especially useful when the labeler is viewing a grid of 4, 6, or 9 images.&lt;/P&gt;
&lt;P&gt;The clustering phase does not appear for object detection models.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;&lt;STRONG&gt;Prelabeling&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;After enough image labels are submitted, a classification model is used to predict image tags, or an object detection model is used to predict bounding boxes. The labeler now sees pages that contain predicted labels already present on each image. For object detection, predicted boxes are also shown. Accuracy will vary depending on the images, labels, domain, and other factors. With prelabeling, you can review the predictions before committing the labels.&lt;/P&gt;
&lt;P&gt;Once a machine learning model has been trained on your manually labeled data, the model is evaluated on a test set of manually labeled images to determine its accuracy at a variety of different confidence thresholds. This evaluation process is used to determine a confidence threshold above which the model is accurate enough to show pre-labels. The model is then evaluated against unlabeled data. Images with predictions more confident than this threshold are used for pre-labeling.&lt;/P&gt;
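&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough illustration of that thresholding idea (a toy Python sketch, not the actual Azure Machine Learning implementation), the snippet below picks the lowest confidence threshold whose validation predictions meet a target accuracy; only unlabeled predictions above that threshold would then be pre-labeled:&lt;/P&gt;
&lt;PRE&gt;
# Toy sketch: choose a pre-labeling confidence threshold from manually labeled validation data.
import numpy as np

def choose_threshold(confidences, correct, target_accuracy=0.95):
    """Return the lowest threshold at which predictions meet the target accuracy."""
    for threshold in np.linspace(0.5, 0.99, 50):
        mask = confidences &gt;= threshold
        if mask.sum() == 0:
            continue
        if correct[mask].mean() &gt;= target_accuracy:
            return threshold
    return None   # model is not yet confident enough to pre-label anything

# Placeholder validation data: per-image model confidence and whether the prediction was correct.
rng = np.random.default_rng(0)
confidences = rng.uniform(0.4, 1.0, size=1000)
correct = (rng.uniform(size=1000) &lt; confidences).astype(float)

print("Pre-label predictions with confidence above:", choose_threshold(confidences, correct))
&lt;/PRE&gt;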
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;&lt;STRONG&gt;Resources&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Learn more about the&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Get started with a&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/trial/get-started-machine-learning/" target="_blank" rel="noopener"&gt;free trial of the Azure Machine Learning service&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jul 2020 04:31:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerate-labeling-productivity-by-using-aml-data-labeling/ba-p/1479869</guid>
      <dc:creator>Vijai_Kannan</dc:creator>
      <dc:date>2020-07-07T04:31:02Z</dc:date>
    </item>
    <item>
      <title>Gain deeper insights from customer reviews using Opinion Mining</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/gain-deeper-insights-from-customer-reviews-using-opinion-mining/ba-p/1501819</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Product reviews,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;social media posts,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;discussion&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;forums are&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;major&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;source of information for businesses&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;for analyzing customer feedback&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Natural Language Processing&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;(NLP)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;specifically sentiment analysis,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;provides&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;an automated way to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;categorize&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;this feedback&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;into&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;positive, neutral, and negative categories.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Using this information&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, businesses can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;identify&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;trends in customer sentiment,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;find what drives customer satisfaction&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and react to negative&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;feedback&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;release of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;new&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;O&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;pinion&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;M&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ining feature&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Text Analytics&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, we&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;provide a new tool&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;analyzing&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;user-generated content&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. Opinion Mining is an implementation of aspect-based&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sentiment&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;analysis&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;which&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;go&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;es&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;beyond&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;identifying&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sentiment&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;provide&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;more insights&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;from&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;text&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;data&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. Using this new tool, businesses can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;extract&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;customers’&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sentiment and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;opinions&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;around specific&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;aspect&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;s or attributes&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;of a product&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;or service&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For example,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;review&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;comment&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;regarding a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;hotel stay&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;such as&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;“&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;I like&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;d&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;pool&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, but&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;staff&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;was&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;un&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;friendly&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;”&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Opinion Mining would identify the following:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;TABLE class=" lia-align-center" data-tablestyle="MsoTableGrid" data-tablelook="1184"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Aspect&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Opinion&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Sentiment&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Pool&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Liked&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Positive&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Staff&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Unf&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;riendly&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Negative&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-left"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;Aggregating th&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;is data can highlight trends and provide&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;deeper&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;understanding&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;&amp;nbsp;of&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;&amp;nbsp;customer sentiment&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;, breaking down&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;&amp;nbsp;the&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;data&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;&amp;nbsp;into specific areas the business can dive into&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW57706319 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW57706319 BCX0"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW57706319 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="OpinionMiningGraph.JPG" style="width: 854px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/202641i0C4F7CDFA850A6EB/image-size/large?v=v2&amp;amp;px=999" role="button" title="OpinionMiningGraph.JPG" alt="OpinionMiningGraph.JPG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;SPAN class="TextRun SCXW94205807 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW94205807 BCX0"&gt;Image showing multiple aspects&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW94205807 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW94205807 BCX0"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and opinions&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="TextRun SCXW94205807 BCX0" data-contrast="auto"&gt;&lt;SPAN class="NormalTextRun SCXW94205807 BCX0"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;from hotel reviews&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="EOP SCXW94205807 BCX0" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559731&amp;quot;:720,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Opinion Mining&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;can add greater depth to understanding of customer sentiment and provide&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;a more granular view of the data&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;voice&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;customer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;customer feedback analytics&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, and&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;call center analytics&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;solutions.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;C&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ustomer feedback that mentions a competitor’s product features can be used to perform competitive analysis.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;And&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;c&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ombined with&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;customer&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;purchase&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;history and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;profile data from CRM systems&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, it&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;can provide a powerful way to analyze&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and guide&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;product&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;design and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;launch,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;brand strategy,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;advertising and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;marketing campaigns.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
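&lt;P&gt;For the hotel review example above, a minimal sketch with the azure-ai-textanalytics Python package shows how the aspect (target) and opinion (assessment) pairs can be retrieved; the endpoint and key are placeholders, and the attribute names follow the GA SDK (preview versions used slightly different names):&lt;/P&gt;
&lt;PRE&gt;
# Minimal opinion mining sketch with the Text Analytics client library for Python.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com/",   # placeholder endpoint
    credential=AzureKeyCredential("YOUR_KEY"),                       # placeholder key
)

documents = ["I liked the pool, but the staff was unfriendly."]
results = client.analyze_sentiment(documents, show_opinion_mining=True)

for doc in results:
    if doc.is_error:
        continue
    for sentence in doc.sentences:
        for opinion in sentence.mined_opinions:
            for assessment in opinion.assessments:
                print(opinion.target.text, assessment.text, assessment.sentiment)
# Expected output (roughly): "pool liked positive" and "staff unfriendly negative"
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;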
&lt;P&gt;&lt;STRONG&gt;Get started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For more information on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;how to use&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sign up and use&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Opinion Mining, see the&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3#opinion-mining" target="_self"&gt;documentation&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Jul 2020 16:07:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/gain-deeper-insights-from-customer-reviews-using-opinion-mining/ba-p/1501819</guid>
      <dc:creator>raymondlag</dc:creator>
      <dc:date>2020-07-01T16:07:46Z</dc:date>
    </item>
    <item>
      <title>COVID-19 “Back to Work” Solution Template using Microsoft Healthcare Bot, Azure and Teams</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/covid-19-back-to-work-solution-template-using-microsoft/ba-p/1460240</link>
      <description>&lt;P class="lia-align-left"&gt;&lt;FONT color="#000000"&gt;&lt;STRONG&gt;&lt;EM&gt;Update&lt;/EM&gt;&lt;EM&gt;: This blog was last edited on 8/4/2020. This blog outlines using Healthcare Bot for COVID-19 Back to Work use case with authentication and data persistence in Azure API for FHIR or Azure SQL Database. The template in Healthcare Bot's template catalog has been updated to a simple version without data persistence (not covered in this blog).&lt;/EM&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="lia-align-center"&gt;&lt;STRONG&gt;&lt;U&gt;What is the&amp;nbsp;COVID-19 “Back to Work” Solution Template?&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;As countries worldwide seek to re-open their economies by relaxing “stay-at-home” orders, many employers are considering how to prepare their facilities and employees for the return to the physical workplace. To do this as safely as possible, it is critical to monitor employees for common COVID-19 symptoms and provide a simple way for affected employees to physically return once they are cleared to do so. Microsoft has developed a special solution template to enable employers worldwide to easily create and deploy Microsoft technologies to scale and automate those critical steps for return to the workplace. We call it the “Back-to-Work” solution template. In this blog, we will discuss the solution and how you can use it to empower a safer return to the workplace for your organization.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;U&gt;&lt;STRONG&gt;Key Elements and Steps&lt;/STRONG&gt;&lt;/U&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Our COVID-19 Back-to-Work solution template is built on Microsoft Healthcare Bot service and the Microsoft Azure platform. Some key elements include:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Symptom tracking for essential workers who have been exposed to COVID (physicians, nurses, ancillary staff, volunteers) (Please refer to&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://blogs.microsoft.com/on-the-issues/2020/04/20/privacy-covid-19-data-collection/" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;Microsoft Privacy Principles for data collection during COVID-19 crisis&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Alignment with&lt;/SPAN&gt;&lt;SPAN&gt; CDC guidelines*&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Organization-configurable documentation of information such as symptoms, occupational exposure type, and testing status&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Allows systems to determine when employees will be safe to return to patient care activities, the workplace, or campus&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Supported by Microsoft’s industry leading compliance, privacy, and security portfolio (&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/trust-center/cloudservices/health" target="_blank" rel="noopener"&gt;&lt;SPAN&gt;link&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;I&gt;*CDC guidelines are rapidly evolving. Microsoft does not guarantee to keep this solution updated&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The solution&amp;nbsp;template&amp;nbsp;involves three steps:&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3steps.png" style="width: 573px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/198497i82F8536F6B27D158/image-dimensions/573x317?v=v2" width="573" height="317" role="button" title="3steps.png" alt="3steps.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;STRONG&gt;&lt;U&gt;Reference Architecture of Microsoft platform support&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The COVID-19 Back-to-Work solution template is an &lt;STRONG&gt;ACCELERATOR KIT&lt;/STRONG&gt; to help you quickly build and deploy a custom solution for your organization. Organizations across all industries, such as healthcare, education, retail, manufacturing, and financial services can use this highly customizable template. Microsoft’s platform provides the necessary capabilities by combining our Healthcare Bot service with the Azure platform and Microsoft Teams as shown below:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="RefArch.png" style="width: 569px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/210311i5BFDFAE2D2BA0757/image-dimensions/569x310?v=v2" width="569" height="310" role="button" title="RefArch.png" alt="RefArch.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;With the Microsoft Healthcare Bot service at its core, the deployed solution will allow you to instrument the health bot in multiple UI channels including a website, Microsoft Teams, Twilio, Facebook and Telegram. (We plan to release configuration for building a mobile native app soon). The COVID-19 Back-to-Work logic asks a set of questions on COVID-19 exposure, symptoms and lab tests. Based on responses, each individual is then either directed to stay at home or cleared to enter the workplace. Information from your preferred data store can be easily visualized using Power BI, thanks to the integrated Power BI model available with the solution.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Using Microsoft Healthcare Bot service to write user responses to Azure API for FHIR is our primary recommendation for healthcare organizations to provide data interoperability from different health systems.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Details on using the healthcare bot with Azure API for FHIR are provided in the Additional Resources section below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;U&gt;&lt;STRONG&gt;COVID-19 Back to Work Talk-Track&lt;/STRONG&gt;&lt;/U&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Watch the video below to get a quick walk-through of the solution overview.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;LI-VIDEO vid="https://youtu.be/icCmDgx3dZc" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/icCmDgx3dZc/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;STRONG&gt;&lt;U&gt;Typical end-to-end Workflow&lt;/U&gt;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;One typical end-to-end flow can be summarized as:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;Company A’s IT Administrator configures a time-based trigger to send email notifications to all employees required to take daily screening&amp;nbsp;assessment&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Users click on the bot link, register themselves and take the day’s screening&amp;nbsp;assessment&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;User responses are captured and stored in Company A’s Azure tenant in the chosen backend (Azure API for FHIR or Azure SQL Database or Azure Cosmos Database)&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;IT Administrator can view metrics on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL class="lia-list-style-type-circle"&gt;
&lt;LI&gt;&lt;SPAN&gt;Users who took assessment vs missed/left incomplete&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Users who were asked to stay home vs who were cleared to enter the workplace&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Predict when a large employee-base is coming back to workplace and make necessary arrangements to ensure their safe return&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Users get an automated email notification every day; they click on the bot link, log in, and complete the day’s screening assessment&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;For a brief demo of this end-to-end workflow, watch the video below.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;LI-VIDEO vid="https://youtu.be/cQ3EhAbfTJ8" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/cQ3EhAbfTJ8/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;U&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/U&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;To use the Accelerator Kit with Azure SQL Database persistence and rest of the building blocks, a GitHub repository is available at&amp;nbsp;&lt;A href="https://github.com/microsoft/covid19-BackToWork" target="_blank" rel="noopener"&gt;https://github.com/microsoft/covid19-BackToWork&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;To use Back-to-Work template with Azure API for FHIR, refer &lt;A href="https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/using-the-covid-19-back-to-work-template-in-microsoft-healthcare/ba-p/1425833" target="_blank" rel="noopener"&gt;step-by-step instruction guide&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;For a quick start guide on Healthcare Bot service, refer&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/updated-on-5-24-2020-quick-start-setting-up-your-covid-19/ba-p/1230537?_lrsc=6c7bb47f-50aa-460b-85e0-dd9a1ca23fb3" target="_blank" rel="noopener"&gt;step-by-step quickstart&lt;/A&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="paragraph" style="margin: 0in; margin-bottom: .0001pt; vertical-align: baseline;"&gt;Stay tuned for&amp;nbsp;more&amp;nbsp;updates on the solution. Thanks for reading and let us know how we can help. Thank you!&amp;nbsp;&lt;/P&gt;
&lt;P class="paragraph" style="margin: 0in; margin-bottom: .0001pt; vertical-align: baseline;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="paragraph" style="margin: 0in; margin-bottom: .0001pt; vertical-align: baseline;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nikita_LinkedIn.jfif" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/179091i12B64717A92FA105/image-size/small?v=v2&amp;amp;px=200" role="button" title="Nikita_LinkedIn.jfif" alt="Nikita_LinkedIn.jfif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/nikitapitliya/" target="_blank" rel="noopener"&gt;Nikita Pitliya&lt;/A&gt;&amp;nbsp;, Microsoft Senior Solution Architect&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Han.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/198500iB04CE6FC7B05957A/image-size/small?v=v2&amp;amp;px=200" role="button" title="Han.jpg" alt="Han.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/han-zhang-csa-msft/" target="_blank" rel="noopener"&gt;Han Zhang&lt;/A&gt;, Microsoft Cloud Solution Architect&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Julian.jpg" style="width: 201px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/198502i6542AC0081689BDE/image-dimensions/201x201?v=v2" width="201" height="201" role="button" title="Julian.jpg" alt="Julian.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/juliansoh/" target="_blank" rel="noopener"&gt;Julian Soh&lt;/A&gt; , Microsoft Principal Solution Architect&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Joe.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/198503iDF7406FD7DB91D21/image-size/small?v=v2&amp;amp;px=200" role="button" title="Joe.jpg" alt="Joe.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/jkarasha/" target="_blank" rel="noopener"&gt;Joe&amp;nbsp;Karasha&lt;/A&gt; , Microsoft Senior Solution Architect&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Greg.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/198505iEEABB180C8878DE0/image-size/small?v=v2&amp;amp;px=200" role="button" title="Greg.jpg" alt="Greg.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/gregbeaumont/" target="_blank" rel="noopener"&gt;Greg Beaumont&lt;/A&gt; , Microsoft Senior Technical Specialist&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Srini_LinkedIn.jpg" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191100iDA92746C7B6B494E/image-size/small?v=v2&amp;amp;px=200" role="button" title="Srini_LinkedIn.jpg" alt="Srini_LinkedIn.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.linkedin.com/in/srini-s-541912/" target="_blank" rel="noopener"&gt;Srini Surendranath&lt;/A&gt; , Microsoft WW Customer Lead&lt;/P&gt;</description>
      <pubDate>Wed, 05 Aug 2020 00:26:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/covid-19-back-to-work-solution-template-using-microsoft/ba-p/1460240</guid>
      <dc:creator>nikitapitliya</dc:creator>
      <dc:date>2020-08-05T00:26:10Z</dc:date>
    </item>
    <item>
      <title>Using GitHub Actions &amp; Azure Machine Learning for MLOps</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/using-github-actions-amp-azure-machine-learning-for-mlops/ba-p/1419027</link>
      <description>&lt;H2&gt;Background&lt;/H2&gt;
&lt;P&gt;Like &lt;A href="https://www.weave.works/blog/what-is-gitops-really" target="_self"&gt;Gitops&lt;/A&gt;, Machine Learning Operations (or MLOps) can make significant improvements in accelerating how data scientists can impact organizational needs. A well-implemented MLOps process not only speeds the time from code to production, but also provides ownership, lineage and historical information, critical for understanding the performance of any machine learning model. Critical to this process is a CI/CD system that understands the elements of ML natively as well as stays in sync with any code or data changes, no matter what platform organizations need them to run on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Unfortunately, many data scientists are still forced to implement MLOps manually. Oftentimes CI/CD platforms are powerful but quite generic, requiring “ML awareness” to be added through custom code. And, worse, these platforms often require separating the actions from the code, leading to difficulty in debugging and hard-to-reproduce caching issues. Our goal is to give data scientists tools that are easy both to implement and to use.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;GitHub Actions and Azure Machine Learning To The Rescue&lt;/H2&gt;
&lt;P&gt;Today, we’re proud to announce a series of GitHub Actions designed to allow people to implement MLOps with just a few configuration settings, yet flexible enough to support even complicated workflows. Now, just by checking in your code or opening a pull request, you can kick off an entire ML pipeline that records all information about the process and updates the model from the actions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first five functions we have published are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/aml-workspace" target="_self"&gt;aml-workspace&lt;/A&gt; - Login action to login / connect with Azure Machine Learning&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/aml-compute" target="_self"&gt;aml-compute&lt;/A&gt; - Create Compute action to create compute for Azure Machine Learning will allow you to create a new compute target on Azure Machine Learning&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/aml-run" target="_self"&gt;aml-run&lt;/A&gt; - Train action for training machine learning models using Azure Machine Learning&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/aml-registermodel" target="_self"&gt;aml-registermodel&lt;/A&gt; - Save a model to Azure Machine Learning&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/Azure/aml-deploy" target="_self"&gt;aml-deploy&lt;/A&gt; - Deploy action to deploy your model on Azure Machine Learning and creates a real-time endpoint for use in other systems.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;NOTE&lt;/STRONG&gt;: Though these are all Azure Machine Learning functions, GitHub Actions for MLOps support any cloud.&lt;/P&gt;
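&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Under the hood, these actions wrap the Azure Machine Learning Python SDK. As a rough, illustrative sketch (assuming the azureml-core SDK; the workspace, cluster, and credential values are placeholders), the first two actions automate roughly the following steps:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch only: roughly what the aml-workspace and aml-compute
# actions automate, using the azureml-core Python SDK. All names are placeholders.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core.compute import AmlCompute, ComputeTarget

# Authenticate with the same service principal stored in the AZURE_CREDENTIALS secret
auth = ServicePrincipalAuthentication(
    tenant_id="&lt;tenantId&gt;",
    service_principal_id="&lt;clientId&gt;",
    service_principal_password="&lt;clientSecret&gt;")

# Connect to the workspace
ws = Workspace.get(
    name="my-aml-workspace",
    subscription_id="&lt;subscriptionId&gt;",
    resource_group="my-resource-group",
    auth=auth)

# Create (or reuse) a compute target for training
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2", min_nodes=0, max_nodes=4)
cluster = ComputeTarget.create(ws, "cpu-cluster", compute_config)
cluster.wait_for_completion(show_output=True)
&lt;/LI-CODE&gt;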
&lt;P&gt;&lt;BR /&gt;These actions are based on &lt;A href="https://azure.microsoft.com/overview/what-is-devops/" target="_self"&gt;DevOps principles and practices&lt;/A&gt; that increase the efficiency of workflows, such as continuous integration, delivery, and deployment. We have applied these principles to the machine learning process with the goal of:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Faster experimentation and development of models&lt;/LI&gt;
&lt;LI&gt;Faster deployment of models into production&lt;/LI&gt;
&lt;LI&gt;Quality assurance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Not only that: because the entire MLOps service is hosted and run on behalf of the users, it frees up time for ML engineers to focus on more business-critical issues. Additionally, workflows can be updated and extended on the back end without the users even knowing, making maintenance of these pipelines even easier.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;To show you end-to-end what this would look like, we have a short video:&lt;BR /&gt;&lt;IFRAME src="https://www.youtube-nocookie.com/embed/bmFr0LYo_6o" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Walk-through&lt;/H2&gt;
&lt;P&gt;Let’s walk through how you would implement something like this using GitHub Actions and Azure Machine Learning.&lt;/P&gt;
&lt;P&gt;First, you’ll need a few prerequisites. These include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure subscription&lt;/LI&gt;
&lt;LI&gt;Contributor access to the Azure subscription&lt;/LI&gt;
&lt;LI&gt;Access to &lt;A href="https://github.com/features/actions" target="_self"&gt;GitHub Actions&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If you don’t have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning today.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Second, create your own repository from the template. You can do this using the "Use this template" button in the repo: &lt;A href="https://aka.ms/ml-template" target="_blank" rel="noopener"&gt;https://aka.ms/ml-template&lt;/A&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorDavid_Aronchick_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="68747470733a2f2f68656c702e6769746875622e636f6d2f6173736574732f696d616765732f68656c702f7265706f7369746f72792f7573652d746869732d74656d706c6174652d627574746f6e2e706e67.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/194601i6A6BB33181A25F0E/image-size/large?v=v2&amp;amp;px=999" role="button" title="68747470733a2f2f68656c702e6769746875622e636f6d2f6173736574732f696d616765732f68656c702f7265706f7369746f72792f7573652d746869732d74656d706c6174652d627574746f6e2e706e67.png" alt="68747470733a2f2f68656c702e6769746875622e636f6d2f6173736574732f696d616765732f68656c702f7265706f7369746f72792f7573652d746869732d74656d706c6174652d627574746f6e2e706e67.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Third, you’ll need a service principal with contributor rights to a resource group (either new or existing). To create a new one on Azure, use the Azure CLI on your computer and execute the following command to generate the required credentials:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Replace {service-principal-name}, {subscription-id} and {resource-group} with a name for your&lt;BR /&gt;# service principal, your Azure subscription id, and your resource group name&lt;BR /&gt;az ad sp create-for-rbac --name {service-principal-name} \&lt;BR /&gt;--role contributor \&lt;BR /&gt;--scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \&lt;BR /&gt;--sdk-auth&lt;/PRE&gt;
&lt;P&gt;&lt;BR /&gt;This will generate the following JSON output:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;{&lt;BR /&gt;"clientId": "&amp;lt;GUID&amp;gt;",&lt;BR /&gt;"clientSecret": "&amp;lt;GUID&amp;gt;",&lt;BR /&gt;"subscriptionId": "&amp;lt;GUID&amp;gt;",&lt;BR /&gt;"tenantId": "&amp;lt;GUID&amp;gt;",&lt;BR /&gt;(...)&lt;BR /&gt;}&lt;/PRE&gt;
&lt;P&gt;Add this JSON output as &lt;A href="https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#creating-encrypted-secrets" target="_self"&gt;a secret&lt;/A&gt; with the name AZURE_CREDENTIALS in your GitHub repository:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="secrets.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/194600i4008C5B8312CCE3D/image-size/large?v=v2&amp;amp;px=999" role="button" title="secrets.png" alt="secrets.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Please follow &lt;A href="https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#creating-encrypted-secrets" target="_self"&gt;this link&lt;/A&gt; for more details.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Next, modify the parameters in the &lt;STRONG&gt;/.cloud/.azure/workspace.json&lt;/STRONG&gt;&amp;nbsp;file in your repository, so that the GitHub Actions create or connect to the desired Azure Machine Learning workspace.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Once you save your changes to the file, the predefined GitHub workflow that trains and deploys a model on Azure Machine Learning is triggered. Check the Actions tab to verify that your actions have run successfully.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="David_Aronchick_2-1590512009969.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/194472i453102EB654D737E/image-size/large?v=v2&amp;amp;px=999" role="button" title="David_Aronchick_2-1590512009969.png" alt="David_Aronchick_2-1590512009969.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Now that you have a running pipeline, you can start modifying the code in the &lt;A href="https://github.com/machine-learning-apps/ml-template-azure/blob/master/code" target="_self"&gt;code folder&lt;/A&gt; so that the pipeline uses your custom code.&lt;/P&gt;
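&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A training script in that folder typically uses the Azure Machine Learning run context to log metrics and writes model files to the outputs folder so that later steps can register and deploy them. Here is a hypothetical, minimal sketch (the dataset, metric, and model are placeholders, not the template’s actual code):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# train.py - hypothetical minimal training script, for illustration only
import os

import joblib
from azureml.core import Run
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

run = Run.get_context()  # handle to the current Azure ML run

# Placeholder training logic
X, y = load_diabetes(return_X_y=True)
model = Ridge(alpha=0.5).fit(X, y)

# Metrics logged here appear in the run history in Azure Machine Learning Studio
run.log("r2_score", float(model.score(X, y)))

# Anything written to ./outputs is uploaded with the run and can be registered as a model
os.makedirs("outputs", exist_ok=True)
joblib.dump(model, "outputs/model.pkl")
&lt;/LI-CODE&gt;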
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With just a few configuration settings, you can move from zero to an entire code &amp;amp; GitHub Action driven workflow. In addition to the above actions, we are also publishing two templates that include code and workflow definitions for an end-to-end ML/AI lifecycle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Simple template repository: &lt;A href="https://github.com/machine-learning-apps/ml-template-azure" target="_self"&gt;ml-template-azure&lt;/A&gt;&lt;BR /&gt;Go to this template and follow the getting started guide to set up an MLOps process within minutes and learn how to use the Azure Machine Learning GitHub Actions in combination. This template demonstrates a very simple process for training and deploying machine learning models.&lt;/LI&gt;
&lt;LI&gt;Advanced template repository: &lt;A href="https://github.com/Azure/aml-template" target="_self"&gt;aml-template&lt;/A&gt;&lt;BR /&gt;This template demonstrates how approval processes can be included in the process and how training and deployment workflows can be split. It also shows how workflows (e.g. deployment) can be triggered by pull requests. More enhancements will be added to this template in the future to make it more enterprise-ready.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please dive into either repo and let us know if there’s anything we can do to help you achieve your goals with MLOps.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Further, though we’ve implemented the first version using Azure Machine Learning, the platform is flexible enough to support most deployment platforms, both on-prem and on any cloud. Just clone our template repo and customize it on your own. And make sure to publish your actions to the GitHub Marketplace so that others can use them!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, we very much want to build a community around these actions - please join us at &lt;A href="https://aka.ms/ml-template" target="_blank" rel="noopener"&gt;https://aka.ms/ml-template&lt;/A&gt; (for the standard template) or &lt;A href="https://aka.ms/ml-template-advanced" target="_blank" rel="noopener"&gt;https://aka.ms/ml-template-advanced&lt;/A&gt; (for the advanced template) to file issues, pull requests and comments about what we can do better. Thank you so much!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;-- &lt;A href="https://twitter.com/MathesonZander" target="_self"&gt;Zander&lt;/A&gt;, &lt;A href="https://twitter.com/Marvin_Buss" target="_self"&gt;Marvin&lt;/A&gt;, Pulkit &amp;amp; &lt;A href="https://twitter.com/aronchick" target="_self"&gt;Dave&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 27 May 2020 01:18:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/using-github-actions-amp-azure-machine-learning-for-mlops/ba-p/1419027</guid>
      <dc:creator>David_Aronchick</dc:creator>
      <dc:date>2020-05-27T01:18:21Z</dc:date>
    </item>
    <item>
      <title>Batch Inference in Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/batch-inference-in-azure-machine-learning/ba-p/1417010</link>
      <description>&lt;P&gt;Today, we are announcing the general availability of&amp;nbsp;Batch Inference in&amp;nbsp;&lt;A href="https://azure.microsoft.com/services/machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt;, a new solution called ParallelRunStep that allows customers to get inferences for terabytes of structured or unstructured data using the power of the cloud. ParallelRunStep provides parallelism out of the box and makes it extremely easy to scale fire-and-forget inference to large clusters of machines, thereby increasing development productivity and decreasing end-to-end cost.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Batch inference is now widely applied across businesses, whether to segment customers, forecast sales, predict customer behaviors, predict maintenance needs, or improve cyber security. It is the process of generating predictions on a high volume of instances without the need for instant responses. The predictions are stored and accessible for further usage. Oftentimes, data scientists and engineers want to generate many predictions at once, but it is challenging to take advantage of scalable compute resources to parallelize such a large workload. For example, how do you partition the large amount of input data? How do you distribute and coordinate workloads across a cluster of machines? How do you consolidate the output results? How do you manage the machine cluster to avoid unnecessary cost? What if a task fails or a machine dies?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;ParallelRunStep, the new solution we provide for batch inference, handles all of this for you. It simplifies scaling machine learning workloads up and out, so data scientists and engineers can spend less time developing computer programs and more time focusing on business objectives.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;ParallelRunStep includes capabilities that enable parallel processing and easy management of your large workloads:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Managed stack with out-of-the-box parallelism&lt;/LI&gt;
&lt;LI&gt;Resiliency with failure tolerance&lt;/LI&gt;
&lt;LI&gt;Fully composable with Azure Machine Learning Pipelines&lt;/LI&gt;
&lt;LI&gt;Flexible design for a variety of workloads&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Managed stack with out-of-the-box parallelism&lt;/H2&gt;
&lt;P&gt;ParallelRunStep is a managed stack with out-of-the-box parallelism. You only need to provide the full data inputs, a scoring script, and the necessary configuration; the rest is taken care of for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="tracych_0-1590463172382.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/194350iFEB6A19912D2FF05/image-size/large?v=v2&amp;amp;px=999" role="button" title="tracych_0-1590463172382.png" alt="tracych_0-1590463172382.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Partition input data&lt;/H3&gt;
&lt;P&gt;Let’s start with input data partitioning. Say you have 10K invoices to be extracted using your form recognizer model. To parallelize the workload, you want to combine 10 invoices into a mini batch and send each mini batch to one machine for execution, giving you 1,000 mini batches in total. In another use case, you have a 5GB csv file containing customer data and need to predict a churn score for each record. You need to split the big file into smaller ones for parallel processing, for example 500 files of 10MB each. With ParallelRunStep, you can easily achieve both of these partitioning strategies.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;ParallelRunStep accepts data inputs through Azure Machine Learning datasets. A dataset is a resource for exploring, transforming, and managing data. Partition your data by setting the mini batch size and leveraging the two types of datasets.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;FileDataset represents single or multiple files in any format. For the invoice extraction case, use a FileDataset and set the mini batch size to 10; the workload will automatically be partitioned into 1,000 mini batches.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TabularDataset represents data in a tabular format by parsing the provided file or list of files. It can be created from csv, tsv, or parquet files, SQL query results, etc. For the churn score calculation case, use a TabularDataset and specify a mini batch size of 10MB to create 500 mini batch workloads.&lt;/P&gt;
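&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch (assuming a workspace with a default datastore; the paths and names are placeholders), the two datasets for the examples above could be created like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch: creating the two kinds of datasets used in the examples above.
# Datastore paths and names are placeholders.
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# FileDataset: each invoice is one file
invoices_ds = Dataset.File.from_files(path=(datastore, "invoices/**"))

# TabularDataset: one large csv of customer records
customers_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "customers/churn.csv"))
&lt;/LI-CODE&gt;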
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Distribute workloads to managed machine cluster&lt;/H3&gt;
&lt;P&gt;After the entire input data is partitioned into multiple mini batches, ParallelRunStep distributes the mini batch workloads to a managed Azure Machine Learning compute cluster.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure Machine Learning compute cluster is created and managed by Azure Machine Learning. It can autoscale each time you run a job, and that autoscaling ensures machines are shut down when your job is completed, saving you cost. It supports both CPU and GPU resources. You can also choose low-priority virtual machines to save resources for other latency-sensitive jobs and reduce your costs further.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One mini batch workload will be sent to one compute node for execution. The more compute nodes you use, the more parallelism you will get. Execution efficiency for each mini batch workload is determined by the power of the compute node.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Consolidate the results&lt;/H3&gt;
&lt;P&gt;Processed results from each mini batch workload are collected, stored, and made available to you for further analysis. You will also get a job run summary and a performance report.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Resiliency with failure tolerance&lt;/H2&gt;
&lt;P&gt;ParallelRunStep is a resilient and highly available solution. While the system manages the retry strategy, you also have control over when to time out your job, how many times to retry, and how many errors to tolerate.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A timeout setting is provided per mini batch workload invocation. Tune this timeout to fail fast and avoid waiting forever in case of failures.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Automatic retry is built in. By default, the system retries each failed mini batch workload three times, and you can customize the maximum retry count. Even with several failed mini batch workloads, you can still get partial results from the other successful mini batch workloads.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can control how many errors to tolerate through the error threshold setting. It is defined as the number of file or record processing errors the entire job can tolerate; when the error threshold is reached, your job will be terminated. You can also choose to ignore all errors and allow your job to process all inputs.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Fully composable in Azure Machine Learning Pipelines&lt;/H2&gt;
&lt;P&gt;ParallelRunStep is available through Azure Machine Learning pipelines. Pipelines are constructed from multiple steps, which are distinct computational units in the pipeline. ParallelRunStep is one of such steps. Existing Azure ML Pipeline customers can easily add or switch to ParallelRunStep to run batch inference.&lt;/P&gt;
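&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Putting the pieces together, a minimal, illustrative pipeline for the churn scoring example might look like the sketch below (assuming the workspace and TabularDataset created earlier, plus an existing compute cluster and environment; all names and values are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch: a batch inference pipeline built around ParallelRunStep.
# Assumes ws and customers_ds from the earlier sketch; names and values are placeholders.
from azureml.core import Environment, Experiment
from azureml.core.compute import ComputeTarget
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

cpu_cluster = ComputeTarget(workspace=ws, name="cpu-cluster")       # existing cluster
env = Environment.get(workspace=ws, name="my-scoring-environment")  # placeholder environment

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="score.py",        # the init()/run() script sketched earlier
    mini_batch_size="10MB",         # ~500 mini batches for the 5GB churn csv
    error_threshold=10,             # terminate the job after 10 failed records
    run_invocation_timeout=600,     # per mini batch timeout, in seconds
    output_action="append_row",
    environment=env,
    compute_target=cpu_cluster,
    node_count=4)

output = PipelineData(name="inferences", datastore=ws.get_default_datastore())

batch_step = ParallelRunStep(
    name="churn-batch-inference",
    parallel_run_config=parallel_run_config,
    inputs=[customers_ds.as_named_input("customers")],
    output=output,
    allow_reuse=True)

pipeline = Pipeline(workspace=ws, steps=[batch_step])
run = Experiment(ws, "batch-inference").submit(pipeline)
run.wait_for_completion(show_output=True)
&lt;/LI-CODE&gt;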
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A pipeline is reusable after it is designed and published. Defining the input dataset or parameters as the type PipelineParameter gives you the ability to use dynamic input for each run, and fine tune your pipeline for better performance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can run a pipeline on a recurring schedule based on elapsed time, or on data changes. For example, if you want to get a monthly customer churn score, you can create a time-based schedule to kick off your batch inference job every month.&amp;nbsp;&lt;/P&gt;
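&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a short, illustrative sketch (building on the pipeline above; names are placeholders), a monthly schedule could be set up like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch: publish the pipeline and run it on a monthly schedule.
from azureml.pipeline.core import Schedule, ScheduleRecurrence

published = pipeline.publish(name="churn-batch-inference",
                             description="Monthly churn scoring")

recurrence = ScheduleRecurrence(frequency="Month", interval=1)
schedule = Schedule.create(ws,
                           name="monthly-churn-schedule",
                           pipeline_id=published.id,
                           experiment_name="batch-inference",
                           recurrence=recurrence)
&lt;/LI-CODE&gt;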
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Flexible design for a variety of workloads&lt;/H2&gt;
&lt;P&gt;ParallelRunStep is flexibly designed for a variety of workloads. It’s not just for batch inference, but also for other workloads that necessitate parallel processing, e.g. training many models concurrently or processing large amounts of data.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;Get started with &lt;A href="https://azure.microsoft.com/en-us/free/ai/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt; for free today!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/azure/machine-learning/how-to-use-parallel-run-step" target="_blank" rel="noopener"&gt;How to use ParallelRunStep&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/batch-inference-notebooks" target="_blank" rel="noopener"&gt;Sample notebooks&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://channel9.msdn.com/Shows/AI-Show/How-to-do-Batch-Inference-using-AML-ParallelRunStep" target="_blank" rel="noopener"&gt;AI Show: How to do Batch Inference using AML ParallelRunStep&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 26 May 2020 16:49:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/batch-inference-in-azure-machine-learning/ba-p/1417010</guid>
      <dc:creator>tracych</dc:creator>
      <dc:date>2020-05-26T16:49:06Z</dc:date>
    </item>
    <item>
      <title>Build the best search experience in your application with new capabilities in Azure Cognitive Search</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-the-best-search-experience-in-your-application-with-new/ba-p/1403656</link>
      <description>&lt;P&gt;&lt;EM&gt;This blog has been authored by Vinod Kurpad (Principal PM, Azure Cognitive Search) and Prachi Jain (PMM, Azure AI)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_self"&gt;Azure Cognitive Search&lt;/A&gt; is a cloud service that enables developers with APIs and tools to build rich search experiences over a variety of content in web, mobile, and enterprise applications. This week at &lt;STRONG&gt;//build&lt;/STRONG&gt;, we announced new capabilities in Azure Cognitive Search making it easier for developers to build search experience and customize with new skills to deliver more relevant results to their users.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#3366ff"&gt;&lt;STRONG&gt;Improved Development Experience&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;Debug sessions is a new portal preview feature in Cognitive Search, that gives you a rich IDE like experience for refining a skillset and fixing issues within an AI enrichment pipeline, As visualized in Skills Graph, you can explore enrichments across all nodes within and enrichment tree and evaluate each skill invocation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Debug Sessions allow you to modify skills as well as inspect inputs and outputs of each step.&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorpracjain_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="debug session.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193592i07705EBEF3FDCA32/image-size/large?v=v2&amp;amp;px=999" role="button" title="debug session.png" alt="debug session.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2" color="#3366ff"&gt;“Debug session skill graph”&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Debug sessions help you with three kinds of issues:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Skillset issue:&lt;/STRONG&gt; expressions, paths and type mismatches within your skillset&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Skill failures:&lt;/STRONG&gt; Applies mostly to custom skills, with the ability to generate a request that you can debug locally&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data inconsistencies:&lt;/STRONG&gt; Handle scenarios where a specific document fails, for example if your data source now contains a document in a language different from the one you configured, which another skill does not recognize.&lt;BR /&gt;Debug sessions also help you start small and incrementally build more complex skillsets.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This capability is now in preview. You can sign up for the &lt;A href="https://docs.microsoft.com/en-us/azure/search/whats-new" target="_self"&gt;preview here&lt;/A&gt;, watch this video to &lt;A href="https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Deep-Dive-with-Debug-Sessions/" target="_self"&gt;learn more&lt;/A&gt;, and follow the &lt;A href="https://docs.microsoft.com/en-us/azure/search/cognitive-search-tutorial-debug-sessions" target="_self"&gt;step-by-step guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;FONT color="#3366ff"&gt;&lt;STRONG&gt;More intelligent than ever&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;We have continued to grow our catalog of skills, with new additions:&lt;/P&gt;
&lt;P&gt;• &lt;A href="https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-pii-detection" target="_self"&gt;PII skill&lt;/A&gt; identifies and redacts personally identifiable information, such as social security numbers, email addresses, credit card numbers, drivers’ licenses and many more entities. You can find the complete list of entities and languages supported &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/named-entity-types?tabs=personal" target="_self"&gt;here&lt;/A&gt;.&lt;BR /&gt;• &lt;A href="https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-text-translation" target="_self"&gt;Translation skill&lt;/A&gt; identifies the language of the text in the document and translates it into a target language. The translation skill supports a variety of languages, ensuring coverage over most scenarios. &lt;BR /&gt;• &lt;A href="https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction" target="_self"&gt;Document extraction skill&lt;/A&gt; can be inserted at any point within the skillset. In the past, document extraction was implicit at the beginning of the enrichment pipeline; now it can be configured within the pipeline. This enables scenarios like working with encrypted files.&lt;BR /&gt;• Azure Machine Learning Skill (&lt;FONT color="#000080"&gt;AML&lt;/FONT&gt;) makes discovering and consuming a model built within AML simple and intuitive. Endpoint discovery, authentication, and schema validation are some of the key benefits. &lt;BR /&gt;The AML skill is a preview feature that you can &lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0jK7x7HQYdDm__YfEsbtcZUMTFGTFVTOE5XMkVUMFlDVFBTTlYzSlpLTi4u" target="_self"&gt;sign up to use&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Now your Azure Machine Learning skills can be automatically detected when you edit your skills from the portal.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AML skillset.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193598i3FE2A986A56BC513/image-size/large?v=v2&amp;amp;px=999" role="button" title="AML skillset.png" alt="AML skillset.png" /&gt;&lt;/span&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;FONT size="2" color="#000080"&gt;“AML Skill : Adding the AML skill to skillset"&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#3366ff"&gt;&lt;STRONG&gt;More relevant search results&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;We are bringing new capabilities that help you deliver better search results:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The introduction of a new &lt;A href="https://en.wikipedia.org/wiki/Okapi_BM25" target="_self"&gt;BM25&lt;/A&gt;-based ranking algorithm, which in our tests increased Normalized Discounted Cumulative Gain (NDCG) by about 5 points! This generates more intuitive results that align with user expectations. You can test this algorithm &lt;A href="https://docs.microsoft.com/en-us/azure/search/index-ranking-similarity" target="_self"&gt;today&lt;/A&gt; (a configuration sketch follows this list).&lt;/LI&gt;
&lt;LI&gt;Mechanisms to provide more consistent results to users even in a world of shards and replicas that are constantly changing. We have introduced the ability to specify a query session (to reduce changes across the session) as well as the ability to request the scoring statistics to be computed based on global statistics (across shards). &lt;A href="https://docs.microsoft.com/en-us/azure/search/index-similarity-and-scoring#scoring-statistics" target="_self"&gt;Learn more about scoring statistics&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;For those of you who are more data-science inclined and want more granular control over ranking your search results, we have exposed some of the statistics computed for indexing purposes, which you can use as input to a “Learning to Rank” model that you create. You can then invoke this model to override the defaults and re-rank your search results. &lt;A href="https://docs.microsoft.com/en-us/rest/api/searchservice/search-documents" target="_self"&gt;Get index statistics as part of your query.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
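&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To illustrate the BM25 option mentioned above, here is a rough sketch of creating an index that opts in to the BM25 similarity algorithm through the REST API (assuming a preview API version that supports the similarity property; the service name, admin key, and field schema are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch only: create an index that opts in to BM25 ranking.
# Service name, admin key, index schema and API version are placeholders/assumptions.
import requests

service = "my-search-service"
admin_key = "&lt;admin-key&gt;"

index_definition = {
    "name": "hotels",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "description", "type": "Edm.String", "searchable": True},
    ],
    "similarity": {"@odata.type": "#Microsoft.Azure.Search.BM25Similarity"},
}

response = requests.put(
    "https://{}.search.windows.net/indexes/hotels".format(service),
    params={"api-version": "2019-05-06-Preview"},
    headers={"Content-Type": "application/json", "api-key": admin_key},
    json=index_definition,
)
response.raise_for_status()
&lt;/LI-CODE&gt;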
&lt;P&gt;&lt;FONT color="#3366ff"&gt;&lt;STRONG&gt;More secure than ever&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;Now that we have encryption for data in transit and at rest, our next security advancements are in the areas of endpoint protection and access control. The latest security features include support for accessing a search service over a private endpoint, limiting access to specific IP ranges, and limiting access to only clients in a virtual network. In addition, we are also&amp;nbsp;announcing preview support for AD managed identities. You can register a search service with Active Directory, and then grant read access to that identity from the Azure data sources you index from. Watch to &lt;A href="https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security" target="_self"&gt;learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Now you can assign Azure Cognitive Search a managed identity that can be given “rights” to read a data source&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="secure.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193602i86B58F110B7356C5/image-size/large?v=v2&amp;amp;px=999" role="button" title="secure.png" alt="secure.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It is great to see how customers like &lt;STRONG&gt;PwC&lt;/STRONG&gt; have built a capability to automatically identify obligations for US consumer financial regulations on Azure Cognitive Search via their Regulatory Obligation Identifier skill, saving significant manual effort for their customers searching for documents that contain regulation content. &lt;A href="https://customers.microsoft.com/en-us/story/811347-pwc-partner-professional-services-azure" target="_self"&gt;Learn more about their solution&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#3366ff"&gt;&lt;STRONG&gt;Get Started:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/search/" target="_blank" rel="noopener"&gt;https://azure.microsoft.com/en-us/services/search/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/search/" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/azure/search/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fazure.microsoft.com%2Fen-us%2Fresources%2Fa-developers-guide-to-building-ai-driven-knowledge-mining-solutions%2F&amp;amp;data=02%7C01%7CPrachi.Jain%40microsoft.com%7C0d35682c5662402f3a8308d80ca9a678%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637273270073286832&amp;amp;sdata=Ta9g6Es2R5sqLTPMQRHDng7YzlDD1H43bFzAmJ7f8JU%3D&amp;amp;reserved=0" target="_blank"&gt;https://azure.microsoft.com/en-us/resources/a-developers-guide-to-building-ai-driven-knowledge-mining-solutions/&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 10 Jun 2020 20:20:41 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-the-best-search-experience-in-your-application-with-new/ba-p/1403656</guid>
      <dc:creator>pracjain</dc:creator>
      <dc:date>2020-06-10T20:20:41Z</dc:date>
    </item>
    <item>
      <title>Build 2020 - Introducing Bot Framework Virtual Assistant 1.0</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-introducing-bot-framework-virtual-assistant-1-0/ba-p/1407833</link>
      <description>&lt;P&gt;Customers and partners have an increasing need to deliver advanced conversational assistant experiences tailored to their brand, personalized to their users, and made available across a broad range of canvases and devices. The &lt;A href="https://microsoft.github.io/botframework-solutions/index" target="_self"&gt;Virtual Assistant Solution Accelerator&lt;/A&gt; answers this need and, with v1.0 released at Build 2020, is now generally available!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The solution accelerator is open source in &lt;A href="https://github.com/Microsoft/botframework-solutions" target="_self"&gt;GitHub&lt;/A&gt; and provides you with a set of core foundational capabilities and full customization over the end user experience - including the name, voice, and personality of your assistant – whilst not sacrificing control over privacy and data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WeatherAndCalendar.gif" style="width: 645px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193497i78EE05BD2E0D07F3/image-size/large?v=v2&amp;amp;px=999" role="button" title="WeatherAndCalendar.gif" alt="WeatherAndCalendar.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can get started in minutes and extend rapidly, using pre-built reusable conversational Skills which cover common assistant use-cases, or develop your own skills using comprehensive end-to-end tooling such as Bot Framework Composer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This article will provide a high-level overview of the Virtual Assistant Solution Accelerator, providing a good grasp of the key concepts. Over the coming weeks we will release additional articles, each providing a focused deep dive into each area.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Virtual Assistant Core&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;The Virtual Assistant Core is the foundation of your solution, built on top of the latest &lt;A href="https://github.com/microsoft/botframework-sdk" target="_self"&gt;Bot Framework SDK&lt;/A&gt; and integrated with Cognitive Services to provide the core assistant experience, such as &lt;A href="https://luis.ai" target="_self"&gt;Language Understanding&lt;/A&gt;, which is used for natural language understanding (NLU). Key features include:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Common dialog implementations&lt;/STRONG&gt; - for common assistant requirements, such as introduction, on-boarding experience, and handling situations where there is a need to hand off the conversation to a human. These base implementations include the language understanding models (.LU files) for recognizing user intents to trigger them (e.g. “I need to speak to a human”).&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;FAQ and Personality&lt;/STRONG&gt; - allowing the bot to answer user questions, from FAQs made available from a &lt;A href="https://qnamaker.ai" target="_self"&gt;QnA Maker&lt;/A&gt; knowledgebase, including taking advantage of the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/how-to/multiturn-conversation" target="_self"&gt;new multi-turn feature&lt;/A&gt;, in addition to making use of the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/chit-chat-knowledge-base" target="_self"&gt;Chit Chat personalities&lt;/A&gt; provided by the service, giving your assistant the ability to respond to common ‘small talk’, making it more engaging. Pre-built data sets are provided for professional, friendly, witty, caring and enthusiastic personalities and they are fully customisable.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Complex conversational capabilities, including interruption and context switching&lt;/STRONG&gt; – interacting via natural language can be complex, but the Virtual Assistant handles common scenarios with ease, such as the ability for a user to switch context or interrupt their conversation, such as to a different skill, escalating to a human, asking for contextual help, going back to an earlier step or cancelling their current flow completely.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-locale Language Generation (LG) support&lt;/STRONG&gt; – The solution takes advantage of LG files (also made generally available at Build), which allow for more natural and dynamic responses. Using LG, you can provide variations for each of your responses, meaning that a conversation doesn’t feel static to a user who is regularly interacting with the assistant. You are also able to access state and in-memory data, allowing you to customize responses based on context. LG files are available in English, Spanish, French, German, Italian and Chinese, with the ability to easily add additional locales if required. Multi-locale support also extends to Language Understanding (LU) assets.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Speech support&lt;/STRONG&gt; - Speech-first experiences can be enabled without any custom-code, responding to the evolving change in user behavior towards multi-modal experiences on a broad range of platforms and devices.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Telemetry, Logging and Analytics&lt;/STRONG&gt; - A &lt;A href="https://microsoft.github.io/botframework-solutions/solution-accelerators/tutorials/view-analytics/1-intro/" target="_self"&gt;telemetry pipeline for Virtual Assistant&lt;/A&gt;, leveraging both PowerBI and Azure Application Insights. This enables you to quickly and easily understand how your assistant is being used by users and gain actionable insights to make tangible improvements. Automated logging of transcripts can also be enabled, allowing for deeper analysis at a later date or passing conversation history to a human agent when handing off. An explicit mechanism is also available to ask users for their feedback when they complete a scenario using the assistant.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Skills&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;Several &lt;A href="https://microsoft.github.io/botframework-solutions/overview/skills/" target="_self"&gt;Skills&lt;/A&gt;, covering common assistant scenarios, are available to plug-in to your assistant immediately – rapidly increasing the capability of your solution without the need to expend custom development effort. However, as with the core, Skills are fully customisable and make use of the same assets (dialogs, LU and LG files), allowing you to easily tailor them to suit your specific requirements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following Skills are currently available and they are pre-integrated with services such as the Microsoft Graph.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/skills/samples/calendar/" target="_self"&gt;Calendar&lt;/A&gt;&lt;/STRONG&gt; – providing calendar, meeting room booking and meeting management capabilities for users. E.g. “book a meeting with Darren tomorrow at 2pm at the Hyatt”. Using the Microsoft Graph, this skill is able to correctly search for an identify contacts without users needing to explicitly use their full name or email address.&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="eyGWNzvx8t.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193499i7111FF2F543A3D34/image-size/large?v=v2&amp;amp;px=999" role="button" title="eyGWNzvx8t.gif" alt="eyGWNzvx8t.gif" /&gt;&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/skills/samples/email/" target="_self"&gt;&lt;STRONG&gt;Email&lt;/STRONG&gt;&lt;/A&gt; – ability to compose, search, read, delete, and reply to email by interacting via natural language, connecting to an Office 365 or Google mailbox.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/skills/samples/to-do/" target="_self"&gt;&lt;STRONG&gt;To do&lt;/STRONG&gt;&lt;/A&gt; – provides task management capabilities to your assistant, allowing users to add, search, delete tasks and mark them as complete when they are done. As with the Calendar and Email Skills, Microsoft Graph integration is built in, bringing synchronization of tasks across platforms such as Microsoft ToDo, Planner and Outlook.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/skills/samples/point-of-interest/" target="_self"&gt;&lt;STRONG&gt;Point of Interest&lt;/STRONG&gt;&lt;/A&gt; – users can find points of interest and directions by taking advantage of the integration with Azure Maps and FourSquare.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following experimental / preview skills are also available.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/reference/skills/experimental/#hospitality-skill" target="_self"&gt;&lt;STRONG&gt;Hospitality&lt;/STRONG&gt;&lt;/A&gt; – allowing for experiences such as managing reservations, check out, and amenity requests.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/reference/skills/experimental/#it-service-management-skill" target="_self"&gt;&lt;STRONG&gt;IT Service Management (ITSM)&lt;/STRONG&gt;&lt;/A&gt; – provides a basic skill that provides ticket and knowledge base related capabilities, with support for ServiceNow built-in.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/reference/skills/experimental/#music-skill" target="_self"&gt;&lt;STRONG&gt;Music&lt;/STRONG&gt; &lt;/A&gt;– Features artists and playlist lookup for the popular music service Spotify. Playback information is then signalled back to the device through Events enabling native device playback.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Channels and Clients&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;It is crucial that you surface your assistant on the channels being used by your users, meeting them where they are. Via &lt;A href="https://azure.microsoft.com/en-gb/services/bot-service/" target="_self"&gt;Azure Bot Service&lt;/A&gt; (ABS), you can &lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-manage-channels?view=azure-bot-service-4.0" target="_self"&gt;connect your assistant to any channel&lt;/A&gt; currently supported by that service, including Microsoft Teams, web chat, Facebook Messenger, Slack and the new preview channel for Alexa Skills. Beyond the channels currently supported by ABS, you can also take advantage of available Bot Framework adapter implementations, allowing your assistant to accept requests directly from other platforms such as WhatsApp, RingCentral, Google Assistant and Zoom.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We also recognize the need for devices such as phones, tablets, and other IoT devices (e.g. cars, alarm clocks, etc.) to act as interfaces through which users interact with the assistant. To simplify this, a &lt;A href="https://microsoft.github.io/botframework-solutions/clients-and-channels/clients/virtual-assistant-client/" target="_self"&gt;base Android application is available&lt;/A&gt;, including the following capabilities:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Can be set as default assistant on the device&lt;/LI&gt;
&lt;LI&gt;Speech support via the Direct Line Speech service, including the ability to open and close the mic on the device&lt;/LI&gt;
&lt;LI&gt;Ability to render Adaptive Cards&lt;/LI&gt;
&lt;LI&gt;Consume events and engage with events from the local Android OS (navigation, phone dialer, etc.)&lt;/LI&gt;
&lt;LI&gt;UI supporting threaded conversation&lt;/LI&gt;
&lt;LI&gt;Light and dark mode support and easy customization of colors&lt;/LI&gt;
&lt;LI&gt;Much more…&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Assistant Samples&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;As we continue to grow our Virtual Assistant capabilities, as well as providing the ability to start with just the Core component (which does not incorporate any pre-configured skills), we have seen the value in providing sample implementations for specific verticals, combining appropriate skills and channels to further accelerate the development of assistants within those industries.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The following assistant samples are currently available.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/solution-accelerators/assistants/enterprise-assistant/" target="_self"&gt;&lt;STRONG&gt;Enterprise Assistant&lt;/STRONG&gt;&lt;/A&gt; - Microsoft has assembled a typical configuration of a Virtual Assistant that covers scenarios often required by our Enterprise customers implementing internal facing productivity assistants. This sample provides an implementation of a Virtual Assistant that includes pre-configured capabilities such as weather, news, calendar, to do, and ITSM. Single sign on support (SSO) for Azure Active Directory is also included.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/solution-accelerators/assistants/hospitality-assistant/" target="_self"&gt;&lt;STRONG&gt;Hospitality Assistant&lt;/STRONG&gt; &lt;/A&gt;- A typical configuration of a Virtual Assistant that is targeted at the hospitality industry. This sample will provide an implementation of a Virtual Assistant that includes actions such as event information, POI finding, weather, news, hospitality (via the hospitality skill detailed above) etc.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Getting started and documentation&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;We have overhauled the &lt;A href="https://microsoft.github.io/botframework-solutions/index" target="_self"&gt;documentation for Virtual Assistant&lt;/A&gt;, with a dedicated site, making it even easier to find information about the Virtual Assistant, its capabilities, customization and deployment. This site will always contain up to date information regarding the latest version of the solution, including steps you can take to migrate to the new releases, ensuring your assistant remains up to date.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To get started today building your own Virtual Assistant, you can find dedicated articles, for both &lt;A href="https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/1-intro/" target="_self"&gt;C#&lt;/A&gt; and &lt;A href="https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/typescript/1-intro/" target="_self"&gt;TypeScript&lt;/A&gt;, detailing the steps you need to take to get up and running within minutes, with end-to-end scripts to configure your assistant and deploy all of the required Azure resources.&lt;BR /&gt;We also provide sample implementations for both continuous integration (CI) and continuous deployment (CD) scenarios within Azure DevOps for both C# and TypeScript.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looking to the future, now that v1.0 is generally available, we are focused on providing the ability for you to take advantage of the other significant capabilities within the Bot Framework eco-system at Build 2020, such as the ability to develop a virtual assistant (and its connected Skills) using Bot Framework Composer. A preview demonstrating Composer integration and the improved developer experience that comes with taking advantage of the new declarative dialog model that underpins it, can be found at &lt;A href="https://aka.ms/bfskillsbuildpreview" target="_blank" rel="noopener"&gt;https://aka.ms/bfskillsbuildpreview&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited about hitting this milestone and can’t wait to see the solutions you build, to improve the experiences of your customers!&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 09:14:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-introducing-bot-framework-virtual-assistant-1-0/ba-p/1407833</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-05-21T09:14:12Z</dc:date>
    </item>
    <item>
      <title>Introducing Reinforcement Learning on Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-reinforcement-learning-on-azure-machine-learning/ba-p/1403028</link>
      <description>&lt;P&gt;We are excited to announce the preview of &lt;A href="https://aka.ms/amlrl-doc" target="_blank" rel="noopener"&gt;Reinforcement Learning on Azure Machine Learning&lt;/A&gt;.&amp;nbsp; &amp;nbsp;Reinforcement learning is an approach to machine learning to train agents to make a sequence of decisions. &amp;nbsp;This technique has gained popularity over the last few years as breakthroughs have been made to teach reinforcement learning agents to excel at complex tasks like playing video games. &amp;nbsp;There are many practical real-world use cases as well, including robotics, chemistry, online recommendations, advertising and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;What is reinforcement learning?&lt;/H2&gt;
&lt;P&gt;In reinforcement learning, the goal is to train an agent &lt;EM&gt;policy&lt;/EM&gt; that outputs actions based on the agent’s observations of its environment.&amp;nbsp; Actions result in further observations and &lt;EM&gt;rewards&lt;/EM&gt; for taking the actions.&amp;nbsp; In reinforcement learning, the full reward for policy actions may take many steps to obtain.&amp;nbsp; Learning a policy involves many trial-and-error runs of the agent interacting with the environment and improving its policy.&amp;nbsp;&lt;/P&gt;
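&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To make this loop concrete, here is a tiny, self-contained sketch of an agent interacting with an environment, collecting observations and rewards. The toy random-walk environment and random policy are invented purely for illustration and are not part of the Azure Machine Learning samples:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Toy illustration of the agent/environment interaction loop described above.
# The environment and policy here are made up for this sketch.
import random


class RandomWalkEnv:
    """The agent starts at 0; the episode ends at +5 (reward +1) or -5 (reward -1)."""

    def reset(self):
        self.position = 0
        return self.position  # the initial observation

    def step(self, action):  # action is -1 (move left) or +1 (move right)
        self.position += action
        done = self.position in (-5, 5)
        reward = 1.0 if self.position == 5 else (-1.0 if done else 0.0)
        return self.position, reward, done


def random_policy(observation):
    # A real policy would map the observation to an action; here we act randomly
    return random.choice([-1, 1])


env = RandomWalkEnv()
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = random_policy(obs)            # the policy chooses an action
    obs, reward, done = env.step(action)   # the environment returns observation + reward
    total_reward += reward
print("episode reward:", total_reward)
&lt;/LI-CODE&gt;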
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The new reinforcement learning support in Azure Machine Learning service enables data scientists to scale training to many powerful CPU or GPU enabled VMs using &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener"&gt;Azure Machine Learning &amp;nbsp;compute clusters&lt;/A&gt; which automatically provision, manage, and scale down these VMs to help manage your costs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;​Learning reinforcement learning with Minecraft&lt;/H2&gt;
&lt;P&gt;We use reinforcement learning in many ways at Microsoft to improve our products and services. &amp;nbsp;For example, Office uses reinforcement learning to improve the suggestions it makes to users in its apps.&amp;nbsp; To help you get started with reinforcement learning, check out the &lt;A href="https://aka.ms/azureml-rl-notebooks" target="_blank" rel="noopener"&gt;sample notebooks&lt;/A&gt; to train an agent to navigate a lava maze in Minecraft using Azure Machine Learning.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The agent’s goal is to navigate a maze and get to the blue exit tile by walking along solid tiles.&amp;nbsp; If the agent wanders off the solid tiles, it falls into lava and must start over again.&amp;nbsp; Each maze map is randomly generated so the agent must learn to generalize to handle different conditions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Agent-Maze.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193076iEEF90AADB8F0076D/image-size/large?v=v2&amp;amp;px=999" role="button" title="Agent-Maze.png" alt="Agent-Maze.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first step for a data scientist is to develop the training script.&amp;nbsp; The training script for the Minecraft sample is on &lt;A href="https://aka.ms/aml-minecraft-script" target="_blank" rel="noopener"&gt;Github&lt;/A&gt;.&amp;nbsp; A typical experience involves iterative development using a combination of local or cloud hosted notebooks, and development tools such as Visual Studio Code or PyCharm.&amp;nbsp; Azure Machine Learning Compute Instance is a cloud hosted Jupyter notebook server that enables rapid iteration using cloud resources.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once a data scientist creates a Python training script, expanding training to multiple nodes is simple.&amp;nbsp; After creating compute clusters in Azure Machine Learning Studio UI or by using Python SDK calls, the data scientist submits an agent training job using the Azure Machine Learning ReinforcementLearningEstimator.&amp;nbsp;&amp;nbsp; The following example sets up a training configuration to run Minecraft on 8 worker compute nodes to collect training data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;worker_config = WorkerConfiguration(
    compute_target=cpu_cluster, 
    node_count=8,
    environment=cpu_minecraft_environment)

estimator = ReinforcementLearningEstimator(
    source_directory='files',
    entry_script='minecraft_train.py',
    compute_target=gpu_cluster,
    environment=gpu_minecraft_environment,
    worker_configuration=worker_config,
    max_run_duration_seconds=2 * 60 * 60,
    shm_size=1024 * 1024 * 1024 * 30)

run = experiment.submit(estimator)
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning automatically allocates compute nodes in the compute target, loads them with container images containing Minecraft and simulation code, and starts running the training script.&amp;nbsp; After training completes, compute nodes automatically deallocate based on user policy to avoid incurring extra charges.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Data scientists can track the progress of training with multiple methods, including TensorBoard, within the Jupyter notebook, and in Azure Machine Learning Studio.&amp;nbsp; Here we show how the training reward increases over time in Azure Machine Learning Studio.&lt;/P&gt;
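&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a small, illustrative sketch (assuming the azureml-tensorboard package is installed and that the training run emits TensorBoard logs), you can launch TensorBoard against a submitted run like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Illustrative sketch: stream a run's TensorBoard logs locally.
# Assumes the azureml-tensorboard package; 'run' is the submitted RL run from above.
from azureml.tensorboard import Tensorboard

tb = Tensorboard([run])
tb.start()   # prints a local URL where TensorBoard is being served
# ... inspect the training curves in the browser ...
tb.stop()
&lt;/LI-CODE&gt;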
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="run-history.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193080i4CBFBB5BED518A8A/image-size/large?v=v2&amp;amp;px=999" role="button" title="run-history.png" alt="run-history.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After training is completed, the agent can be evaluated to see how well it performs.&amp;nbsp; In the animation below, the agent is seen successfully navigating the maze!&amp;nbsp; Training this agent takes around 90 minutes using the configuration in the Minecraft code sample.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="lava_maze_minecraft.gif" style="width: 640px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/193081i3200E7CA57219453/image-size/large?v=v2&amp;amp;px=999" role="button" title="lava_maze_minecraft.gif" alt="lava_maze_minecraft.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Training Agents on Azure Machine Learning&lt;/H2&gt;
&lt;P&gt;Azure Machine Learning customers are applying Reinforcement Learning on Azure Machine Learning to industrial and other applications.&amp;nbsp; We are seeing customers train reinforcement learning agents on up to 512 cores or run their training over multiple days.&amp;nbsp; In practice, it can take millions of trial runs to train an agent.&amp;nbsp; These trial runs happen automatically and rapidly in parallel, and the system continuously learns and improves.&amp;nbsp; Azure Machine Learning uses the &lt;A href="https://ray.io" target="_blank" rel="noopener"&gt;Ray&lt;/A&gt; framework to distribute reinforcement learning training and support training at large scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To train agents on Azure Machine Learning, data scientists use standard machine learning tools including the Azure Machine Learning Python SDK, the Azure Machine Learning Studio UI to monitor and manage progress, and the command line interface. Azure Machine Learning simplifies running reinforcement learning on remote compute clusters, including tracking experiment results in Tensorboard and Azure Machine Learning Studio.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started with reinforcement learning&lt;/H2&gt;
&lt;P&gt;Learn more about &lt;A href="https://aka.ms/azure-machine-learning-service/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt; and &lt;A href="https://aka.ms/azureml-free-trial" target="_blank" rel="noopener"&gt;get started with a free trial&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can train your own reinforcement learning agents on Azure Machine Learning using the following resources.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/amlrl-doc" target="_blank" rel="noopener"&gt;How to: reinforcement learning with Azure Machine Learning&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/azureml-rl-notebooks" target="_blank" rel="noopener"&gt;Github samples&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/azureml-rl-aishow" target="_blank" rel="noopener"&gt;AI Show: introductory video&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://mybuild.microsoft.com/sessions/6ca95fbf-9164-4143-8159-cc88daba1d7c?source=schedule" target="_blank" rel="noopener"&gt;Video interview&lt;/A&gt; with Katja Hofmann, Director of Game Intelligence Group, Microsoft Research&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We look forward to hearing your feedback!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 20 May 2020 04:28:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-reinforcement-learning-on-azure-machine-learning/ba-p/1403028</guid>
      <dc:creator>keijik</dc:creator>
      <dc:date>2020-05-20T04:28:00Z</dc:date>
    </item>
    <item>
      <title>Bringing IntelliSense, collaboration and more to Jupyter notebooks with Azure Machine Learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009</link>
      <description>&lt;P&gt;&lt;EM&gt;This post is co-authored by Maxim Lukiyanov, Principal PM Manager, Azure Machine Learning.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we are pleased to announce the release of an enhanced notebook editor in the Azure Machine Learning Studio. This update enables members of an Azure ML workspace to edit, share, and collaborate on notebooks in the same environment that contains their ML experiments, metrics, models, datasets, and more. With the new Studio Notebooks, Data Scientists are one link away from collaboration. We believe this simplicity of sharing will be especially welcome in today’s world of mandatory remote work. The new notebook editor is based on the open source &lt;A href="https://github.com/nteract/nteract" target="_blank" rel="noopener"&gt;nteract&lt;/A&gt; project and provides full compatibility with standard Jupyter. The editor also brings some best-in-class code editor features from VS Code that our customers know and love. For the first time, Data Scientists can use advanced features like full&amp;nbsp;&lt;A href="https://code.visualstudio.com/docs/editor/intellisense" target="_blank" rel="noopener"&gt;IntelliSense&lt;/A&gt; and inline error highlighting directly in their Jupyter notebooks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="lia-teaserTinyMceEditorabeomor_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_3-1589322714796.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191125iE9B022665D09A882/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_3-1589322714796.png" alt="abeomor_3-1589322714796.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorabeomor_9" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;Here’s how to get started using the &lt;A href="https://aka.ms/studionotebooks" target="_blank" rel="noopener"&gt;new Studio Notebooks experience&lt;/A&gt;. The Azure Machine Learning workspace is a one-stop shop for all your machine learning needs. &lt;BR /&gt;In this workspace, users can easily share all their machine learning assets with teammates. In the notebook experience, users can browse their own files and the files of others on their team, making it extremely simple to collaborate. Users can start working with a Jupyter notebook directly in the workspace and have easy access to experiment details, datasets, models, and more.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Getting Started with Studio Notebooks Public Preview.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191126i12EB9E8BF0A73D72/image-size/large?v=v2&amp;amp;px=999" role="button" title="Getting Started with Studio Notebooks Public Preview.gif" alt="Getting Started with Studio Notebooks Public Preview.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Live Collaboration Experiences (&lt;EM&gt;Coming Soon&lt;/EM&gt;)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Currently, there are not many ways for data scientists to share notebooks and experiments with their team members in a secure fashion. Even when sharing is successful, it is still difficult to collaborate with others and ensure an experiment will work seamlessly with another user’s computer setup. The whole process can be cumbersome. Data scientists should have access to a tool that can make the authoring and collaboration experiences as easy as possible while ensuring the use of a compute that supports the needs for a specific experiment.&lt;/P&gt;
&lt;P&gt;Coming soon, users will be able to collaborate instantly with colleagues using &lt;A href="https://support.microsoft.com/en-ie/office/get-started-with-fluid-framework-preview-d05278db-b82b-4d1f-8523-cf0c9c2fb2df?ui=en-us&amp;amp;rs=en-ie&amp;amp;ad=ie" target="_blank" rel="noopener"&gt;Microsoft Office’s Fluid Framework&lt;/A&gt; for a seamless co-editing experience. The Fluid Framework is the same technology Office uses for collaboration, so users can expect collaboration functionality and features similar to the Microsoft Office suite. Users will be able to pair debug and unblock teammates more easily by live co-editing a notebook.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_2-1588726697181.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/189360iD43CD627C2D3A62C/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_2-1588726697181.png" alt="abeomor_2-1588726697181.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Built-In Notebook IntelliSense&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;and Improved Editor&amp;nbsp; &lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;When a data scientist is writing a code cell in a notebook, the code is often error-prone due to typos, syntax errors, or using the wrong function name. Many of these issues stem from the lack of common code editor features like code suggestions or syntax highlighting. &lt;BR /&gt;With Studio Notebooks, users get a true code editor experience in a notebook: every code cell is powered by VS Code’s &lt;A href="https://github.com/microsoft/monaco-editor" target="_blank" rel="noopener"&gt;Monaco editor&lt;/A&gt;. When a user starts writing in a code cell, they can use first-in-class code editor features such as IntelliSense, inline error highlighting, code suggestions, variable highlighting, multi-line select, and more. These features help boost productivity for anyone typing code, and are even more impactful within a notebook canvas, fitting well into the user’s fast-paced and iterative workflow.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_2-1589322689974.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191124i3C9E6E32B80A3117/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_2-1589322689974.png" alt="abeomor_2-1589322689974.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Inline Compute Controls and Notebook Kernel Switching&lt;/H2&gt;
&lt;P&gt;Occasionally when training a model, a data scientist might need more powerful computing resources or might need to write some code in a language other than Python. Changing compute or installing a new kernel can sometimes be a time-intensive process. With Studio Notebooks this process just got much more efficient. Users can more easily control their Azure Machine Learning compute resources directly from the notebooks. The notebook toolbar has inline controls to start, stop, and create a new compute, and the dropdown shows details for all created computes. Backed by the power of the Azure cloud, users can easily spin up a new GPU or CPU compute instance that meets their computing needs, right inside the notebooks.&lt;/P&gt;
&lt;P&gt;While using a notebook, users will also be able to add new kernels to the notebook editor and quickly switch between different kernels (Python and R kernel are available by default). Learn more about adding new kernels &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks#add-new-kernels" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_1-1589322670845.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191123i40A6617C98F9CC74/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_1-1589322670845.png" alt="abeomor_1-1589322670845.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;Automated Notebook Cleanup with Gather &lt;EM&gt;(Coming Soon)&lt;/EM&gt;&lt;/H2&gt;
&lt;P&gt;When experimenting and prototyping, a notebook can often become busy as a user explores different approaches. After eventually reaching the desired result, a user would then need to manually curate the cells involved in this specific flow. This task can be laborious and error-prone, leaving users without a strong approach for aggregating related cells. With the &lt;A href="https://github.com/microsoft/gather" target="_blank" rel="noopener"&gt;Gather&lt;/A&gt;&amp;nbsp;feature, users can now easily clean up notebooks: Gather runs an automated dependency analysis of your notebook, ensuring the essential code is kept while removing any irrelevant pieces.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_0-1589322651059.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191122i95FD2308C2E89641/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_0-1589322651059.png" alt="abeomor_0-1589322651059.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;R Support&lt;/H2&gt;
&lt;P&gt;R is an extremely popular language in the data science community, and many data scientists use R in their experiment workflow. With this new notebook editor, the R language is supported by default in any notebook via the R kernel. You can easily switch to the R kernel and run R code directly in any notebook.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_11-1588726780586.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/189371iACFB344D98AB5A5E/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_11-1588726780586.png" alt="abeomor_11-1588726780586.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H2&gt;iPyWidget Support&lt;/H2&gt;
&lt;P&gt;Data scientists often use widgets to communicate concepts and ideas more effectively in Jupyter notebooks. In the Studio Notebooks experience, users can create richer notebooks with ipywidgets. Studio Notebooks fully support all standard &lt;A href="https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html" target="_blank" rel="noopener"&gt;ipywidgets&lt;/A&gt;. You can now make more captivating and interactive notebooks directly in the Azure Machine Learning Studio.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="abeomor_10-1588726772246.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/189370iBF9A373B2E9F2723/image-size/large?v=v2&amp;amp;px=999" role="button" title="abeomor_10-1588726772246.png" alt="abeomor_10-1588726772246.png" /&gt;&lt;/span&gt;&lt;/P&gt;
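&lt;P&gt;For example, a small illustrative ipywidgets snippet like the following runs unchanged in a Studio notebook cell (the slider and callback are hypothetical placeholders for your own interactive controls):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=5, min=0, max=10, description='Epochs:')

def on_value_change(change):
    # react to slider movement, e.g. update a plot or a training parameter
    print('New value:', change['new'])

slider.observe(on_value_change, names='value')
display(slider)
&lt;/LI-CODE&gt;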
&lt;H2&gt;Try it out yourself&lt;/H2&gt;
&lt;P&gt;Click ‘Notebooks’ in the Azure Machine Learning Studio and start building&amp;nbsp;Jupyter&amp;nbsp;notebooks with minimal setup. The most popular machine learning packages and the&amp;nbsp;&lt;A href="https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py" target="_blank" rel="noopener"&gt;Azure Machine Learning Python SDK&lt;/A&gt;&amp;nbsp;come pre-configured on any attached compute instance.&amp;nbsp;You can check out the &lt;A href="https://aka.ms/studionotebooks" target="_self"&gt;documentation&lt;/A&gt;&amp;nbsp;and &lt;A href="https://www.youtube.com/watch?v=AAj-Fz0uCNk&amp;amp;feature=youtu.be" target="_self"&gt;video&lt;/A&gt; for more tips on how to use Studio Notebooks.&lt;/P&gt;
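&lt;P&gt;As a quick sketch of what that pre-configured SDK gives you (assuming the code runs on an Azure ML compute instance, where a workspace config file is already present), you can connect to the workspace and list its experiments without any extra setup:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;
from azureml.core import Workspace

# Loads the workspace from the config file already present on the compute instance.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location)
print(list(ws.experiments))   # experiments already tracked in this workspace
&lt;/LI-CODE&gt;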
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited for everyone to try out this new Studio Notebooks experience and become more productive creating ML experiments with the Azure Machine Learning Studio! We would also love to hear about your experience on the new and upgraded version so please send us your &lt;A href="https://aka.ms/nbcomponentsurvey" target="_self"&gt;feedback&lt;/A&gt;!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=AAj-Fz0uCNk" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/AAj-Fz0uCNk/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 03:37:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009</guid>
      <dc:creator>abeomor</dc:creator>
      <dc:date>2020-05-21T03:37:31Z</dc:date>
    </item>
    <item>
      <title>Build predictive maintenance, conversational user interface and powerful analytics at the edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-predictive-maintenance-conversational-user-interface-and/ba-p/1400179</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Azure Cognitive Services in containers, businesses across industries have unlocked new productivity gains and insights. Organizations ranging from manufacturing and legal to financial services have transformed their processes and customer experiences as a result. Azure is the only cloud provider giving customers full flexibility to run AI on their own terms (on-premises, in the cloud, or at the edge).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In general, developers need the right data science skills to build, train, and run models, but Cognitive Services brings AI within reach of every developer with pre-built and customizable AI that can be embedded into apps using the programming language of your choice. With AI container support, we took one step further, empowering developers to deploy hybrid solutions. This addresses current challenges with data control, privacy, and network-intensive workloads.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The goal of this post is to show several examples of how enterprise applications leverage containers to solve AI needs at the edge, and to help you learn more about how customers have built solutions using Cognitive Services containers.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_0-1589829649893.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192811i643B6EA76C65C123/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_0-1589829649893.png" alt="phanim_0-1589829649893.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Use case #1 – &lt;/STRONG&gt;&lt;STRONG&gt;Anomaly detection in factory equipment using predictive maintenance solution&lt;/STRONG&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;A href="https://www.tibco.com/" target="_blank" rel="noopener"&gt;Tibco&lt;/A&gt;&amp;nbsp;-&amp;nbsp;&lt;/STRONG&gt;TIBCO, a strategic partner of Microsoft, has a market offering that incorporates two containerized Cognitive Services into their Anomaly Detection solution.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TIBCO unlocks the potential of data assets for making faster, smarter decisions with their Connected Intelligence Platform. The platform allows you to seamlessly &lt;EM&gt;connect&lt;/EM&gt; to any application or data source; &lt;EM&gt;unify&lt;/EM&gt; data for greater access, trust, and control; and confidently &lt;EM&gt;predict&lt;/EM&gt; outcomes at scale in real-time.&lt;/P&gt;
&lt;P&gt;The TIBCO anomaly detection solution includes Azure Cognitive Services container deployment with text mining and root cause analysis. Anomaly detection and analysis provides value across nearly every industry including energy, financial fraud and risk, algorithmic insurance, connected vehicles, healthcare and insurance claims, and manufacturing fault detection and yield optimization.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This blog focuses on manufacturing, identifying anomalies for asset management. This specific example uses machine learning techniques to detect anomalies, understand root cause from related text data, and alert case managers when sensor readings are deviating from expected patterns. This enables operators to implement condition-based maintenance interventions before equipment failure, and to prevent costly manufacturing process shutdown.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Reference Architecture:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_1-1589827117610.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192784iC8271C691F4FDAEF/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_1-1589827117610.png" alt="phanim_1-1589827117610.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Solution overview:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Manufacturing, energy, mining and power plant customers use TIBCO Data Science to detect anomalies on historical data from various facilities at scale. This TIBCO Data Science workflow produces a portable analytics model object that the maintenance engineer can call remotely, and also bring to the engineering site and invoke using TIBCO Spotfire locally.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TIBCO Spotfire calls the Azure containerized Cognitive Services using a Spotfire data function via a Python API. As the maintenance engineer detects anomalies onsite, Spotfire’s brush-linked visual analytics and data science tooling runs a root cause anomaly analysis. After the anomalies are detected, root cause factors are identified for a specific time window as part of the analysis. These analyses are combined with results from the Text Analytics containerized service, using key phrase extraction to determine the recommended actions to be taken.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Let’s see how this solution is built in a few steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 1 - Anomaly Detection on historical data&lt;/STRONG&gt;: Here is the TIBCO Data Science platform which is used for detecting anomalies across all the manufacturing sites.&lt;/P&gt;
&lt;P&gt;What you see in the image below is power plant data. The workflow consists of multiple steps including data pre-processing, transformation of time-series data, filtering and ultimately calling the MS Azure services from a TIBCO Data Science operator designed to invoke the services.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_2-1589827117635.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192785i84424D5C470A5C2A/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_2-1589827117635.png" alt="phanim_2-1589827117635.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 2 – Detecting Anomalies at remote site&lt;/STRONG&gt;&lt;STRONG&gt;:&lt;/STRONG&gt; Once the maintenance engineer is at a remote location that may not be connected to the internet, the first step is to perform anomaly detection analysis using the Spotfire equipment maintenance dashboard by selecting different features as shown below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_3-1589827117652.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192786i06E8572FD3F92522/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_3-1589827117652.png" alt="phanim_3-1589827117652.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The user selects a response metric and granularity, and the anomalies are detected based on historical data collected at the particular site. As shown in the image below, the upper section shows the original data readings along with the expected values provided from the containerized service, and the bottom section shows the difference between the original and expected values over time. The red markers are anomalies detected by the Anomaly Detection routine.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_4-1589827117738.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192788iB6762833C22C07AA/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_4-1589827117738.png" alt="phanim_4-1589827117738.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Phase 3 – Performing root cause analysis&lt;/STRONG&gt;&lt;STRONG&gt;: &lt;/STRONG&gt;The site engineer investigates anomalies from a certain time window to perform a root cause analysis. On the image below (bottom left portion), key driving factors indicate what factors are contributing to the anomalies in production per minute for the time window selected by an engineer using the TIBCO Spotfire analytics application.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_5-1589827117795.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192787i5391ABEBE0A550F5/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_5-1589827117795.png" alt="phanim_5-1589827117795.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;At this point, the site engineer digs deeper to understand the maintenance action items performed prior to these anomalies occurring by getting insights from log data using key phrase extraction from the Text Analytics container as indicated in the image below in the top section (Get Insights).&lt;/P&gt;
&lt;P&gt;From here, TIBCO Spotfire generates a recommendation for next steps. In the example shown, the&lt;EM&gt; ‘BaroPressure’&lt;/EM&gt; sensor was restarted 5 times vs. being recommended for replacement just once. So, the equipment has to be replaced to avoid any further shutdowns.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_6-1589827117919.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192790iB90DE5AB19B86DA0/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_6-1589827117919.png" alt="phanim_6-1589827117919.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In summary, anomalies were first detected in historical power plant data across all sites using TIBCO Data Science. This produced a portable analytics model object that the maintenance engineer can take to the remote site. At the site, recommended actions are generated by combining containerized Cognitive Services from Azure with decision intelligence embedded in TIBCO Spotfire through root cause analysis.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This use case is just an illustrative example and one of the many ways customers use TIBCO’s analytics and data science capabilities to solve some of the most complex business problems, such as pricing and promotion optimization, supply chain analytics, fraud detection, yield enhancement, and real-time process control.&lt;/P&gt;
&lt;P&gt;To learn more about TIBCO’s anomaly detection solution, visit &lt;A href="https://www.tibco.com/solutions/anomaly-detection" target="_blank" rel="noopener"&gt;Anomaly Detection at TIBCO&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Use case #2 – Organizing unstructured data and surface insight in Power BI dashboard&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://wilsonallen.com/" target="_blank" rel="noopener"&gt;Wilson Allen&lt;/A&gt;&amp;nbsp;&lt;/STRONG&gt;is a systems integrator with deep analytic experience serving the legal industry. Given the confidentiality of the work involved, the data must stay local. Law firms use Wilson Allen’s Proforma Tracker to manage their pre-bill processes.&amp;nbsp; Wilson Allen is using Cognitive Services to recognize text, optimize legal templates and process them for billing, translate documents and ensure they comply with regulatory requirements, and enable editing via handwriting that is captured digitally.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Architecture-wise, Wilson Allen is using Azure AI in the form of containerized Cognitive Services to bring form-based data into the firm’s data ecosystem and to create metadata from text-based data, using ML techniques to establish patterns and causal relationships, and to render it all in a common data model, without a traditional data warehouse, visualized through Power BI reporting.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using Form Recognizer, they are also extracting historic sets of HR PDF forms (intake, change, termination) to create a database for master data and analytic purposes.&amp;nbsp; Additionally, they are using Sentiment Analysis to marry structured and unstructured client feedback data, which is then surfaced in Power BI.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Containers used&lt;/STRONG&gt;: Form Recognizer, Text Analytics (Sentiment Analysis)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_7-1589827117935.jpeg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192791i31FE3C682020D90D/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_7-1589827117935.jpeg" alt="phanim_7-1589827117935.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Use case #3 – Build a conversational user interface to help retailers manage inventory better, help bankers improve the support experience, and provide healthcare customers with better guidance in managing appointments and tracking efficiency.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;A href="https://www.insight.com/en_US/home.html" target="_blank" rel="noopener"&gt;Insight&lt;/A&gt;&amp;nbsp;-&amp;nbsp;&lt;/STRONG&gt;Insight’s Kiosk Framework leverages the Cognitive Services containers to create a functional experience for the customer with a minimal connectivity requirement. Using the Azure IoT Edge runtime and IoT hub, the framework can be managed remotely and securely allowing the customer to change the “personality” of the kiosk remotely based on the business requirements and constraints around image capture, responses and content. It also allows the customer to manage the deployment and management of this framework at scale across locations.&lt;/P&gt;
&lt;P&gt;Containers used: Face Recognition, Speech to Text, Language Understanding&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_8-1589827117948.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192789i69D5ED0954AAE8DA/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_8-1589827117948.png" alt="phanim_8-1589827117948.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get Started...! Learn more and deploy AI at the edge with Cognitive Services &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Deploying your first container is about a 2-minute read. You basically create a resource in the Azure portal, download the container image, and run the container with a few environment variables. Here's a&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/blog/running-cognitive-service-containers/" target="_self"&gt;guide&lt;/A&gt;&amp;nbsp;to help you get started running containers.&amp;nbsp;&lt;/P&gt;
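&lt;P&gt;For illustration, once a container such as Text Analytics is running locally it can be called like any REST endpoint. The sketch below is only an example: the host, port, and API path are assumptions that depend on the container image and version you deploy, so adjust them to match your setup.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;
import requests

# Example endpoint for a locally running Text Analytics sentiment container;
# the path and port are assumptions and may differ for your image/version.
endpoint = 'http://localhost:5000/text/analytics/v3.0/sentiment'
payload = {'documents': [{'id': '1', 'language': 'en',
                          'text': 'The new dashboard makes maintenance planning much easier.'}]}

response = requests.post(endpoint, json=payload)
response.raise_for_status()
print(response.json())
&lt;/LI-CODE&gt;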
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best Practices&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Let’s look at some best practices that might be helpful when you run containers:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Container security&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Security should be a primary focus whenever you're developing applications. The importance of security is a metric for success. When you're architecting a software solution that includes Cognitive Services containers, it's vital to understand the limitations and capabilities available to you. For more information about network security, see&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-virtual-networks" target="_blank" rel="noopener"&gt;Configure Azure Cognitive Services virtual networks&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;The diagram below illustrates the default and&amp;nbsp;&lt;STRONG&gt;non-secure&lt;/STRONG&gt;&amp;nbsp;approach:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="phanim_9-1589827117955.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192792i368E65C2FAB6F9E2/image-size/large?v=v2&amp;amp;px=999" role="button" title="phanim_9-1589827117955.png" alt="phanim_9-1589827117955.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an alternative and&amp;nbsp;&lt;EM&gt;secure&lt;/EM&gt;&amp;nbsp;approach, consumers of Cognitive Services containers could augment a container with a front-facing component, keeping the container endpoint private. Let's consider a scenario where we use&amp;nbsp;&lt;A href="https://istio.io/" target="_blank" rel="noopener"&gt;Istio&lt;/A&gt;&amp;nbsp;as an ingress gateway. Istio supports HTTPS/TLS and client-certificate authentication. In this scenario, the Istio front end exposes the container access, presenting the client certificate that is whitelisted beforehand with Istio.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nginx.com/" target="_blank" rel="noopener"&gt;Nginx&lt;/A&gt;&amp;nbsp;is another popular choice in the same category. Both Istio and Nginx act as a service mesh and offer additional features including things like load-balancing, routing, and rate-control.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Container networking&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Cognitive Services containers are required to submit metering information for billing purposes. Failure to allow list various network channels that the Cognitive Services containers rely on will prevent the container from working.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Allow list Cognitive Services domains and ports&lt;/H4&gt;
&lt;P&gt;The host should allow list&amp;nbsp;&lt;STRONG&gt;port 443&lt;/STRONG&gt;&amp;nbsp;and the following domains:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;*.cognitive.microsoft.com&lt;/LI&gt;
&lt;LI&gt;*.cognitiveservices.azure.com&lt;/LI&gt;
&lt;/UL&gt;
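&lt;P&gt;Before deploying, a quick connectivity check from the container host can confirm that outbound port 443 to your resource’s billing endpoint is reachable. The sketch below is illustrative only; the host name is an example and should be replaced with your own endpoint.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;
import socket

host = 'westus.api.cognitive.microsoft.com'   # example endpoint, substitute your own
try:
    with socket.create_connection((host, 443), timeout=5):
        print(f'Outbound 443 to {host} is open')
except OSError as err:
    print(f'Cannot reach {host}:443 - {err}')
&lt;/LI-CODE&gt;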
&lt;H4&gt;&lt;BR /&gt;Disable deep packet inspection&lt;/H4&gt;
&lt;P&gt;&lt;A href="https://en.wikipedia.org/wiki/Deep_packet_inspection" target="_blank" rel="noopener"&gt;Deep packet inspection&lt;/A&gt;&amp;nbsp;(DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and usually takes action by blocking, re-routing, or logging it accordingly.&lt;/P&gt;
&lt;P&gt;Disable DPI on the secure channels that the Cognitive Services containers create to Microsoft servers. Failure to do so will prevent the container from functioning correctly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started today by going to Azure Cognitive Services to build intelligent applications that span Azure and the edge. For more information, please refer to the container&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;&amp;nbsp;page.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Tue, 19 May 2020 15:00:03 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-predictive-maintenance-conversational-user-interface-and/ba-p/1400179</guid>
      <dc:creator>Phani_Mutyala</dc:creator>
      <dc:date>2020-05-19T15:00:03Z</dc:date>
    </item>
    <item>
      <title>ONNX Runtime Training Technical Deep Dive</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/onnx-runtime-training-technical-deep-dive/ba-p/1398310</link>
      <description>&lt;P&gt;&lt;EM&gt;Author: Sherlock Huang, AI Frameworks, Microsoft&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM style="font-family: inherit;"&gt;This post is co-authored by Cheng Tang,&amp;nbsp;Jesse Benson,&amp;nbsp;&lt;SPAN&gt;Kaarthik Sivashanmugam and Alexey Svyatkovskiy&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;Today we &lt;/SPAN&gt;&lt;SPAN&gt;&lt;A href="https://aka.ms/ort-build2020" target="_blank" rel="noopener"&gt;announced&lt;/A&gt;&lt;/SPAN&gt; &lt;SPAN&gt;the preview for new training&lt;/SPAN&gt;&lt;SPAN&gt; feature in ONNX Runtime&lt;/SPAN&gt; (ORT). This blog explains how we have been using it&lt;SPAN&gt; to accel&lt;/SPAN&gt;&lt;SPAN&gt;erate training for large transformer models. &lt;/SPAN&gt;&lt;SPAN&gt;ONNX Runtime Training &lt;/SPAN&gt;is&lt;SPAN&gt; inte&lt;/SPAN&gt;&lt;SPAN&gt;grated&lt;/SPAN&gt; with PyTorch so that &lt;SPAN&gt;existing train&lt;/SPAN&gt;ing&lt;SPAN&gt; code &lt;/SPAN&gt;can be directly &lt;SPAN&gt;accelerate&lt;/SPAN&gt;d&lt;SPAN&gt; for &lt;/SPAN&gt;&lt;SPAN&gt;training.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this paper, we will describe some of the key aspects of ORT design and &lt;SPAN&gt;implementation&lt;/SPAN&gt; that enable us&lt;SPAN&gt; to achieve&lt;/SPAN&gt; &lt;SPAN&gt;the distributed training &lt;/SPAN&gt;&lt;SPAN&gt;performance improvements&lt;/SPAN&gt;. We will also use &lt;A href="https://arxiv.org/abs/1810.04805" target="_blank" rel="noopener"&gt;BERT-L&lt;/A&gt; pre-training as the benchmark to illustrate the performance of ORT training. Finally, we will present a case study of training &lt;A href="https://openai.com/blog/better-language-models/" target="_self"&gt;GPT-2&lt;/A&gt;&amp;nbsp;model for code autocompletion feature in Visual Studio&amp;nbsp;&lt;SPAN style="font-style: normal !msorm;"&gt;&lt;EM&gt;&lt;A href="https://visualstudio.microsoft.com/services/intellicode/" target="_blank" rel="noopener"&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;Intelli&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;C&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;ode&lt;/SPAN&gt;&lt;/I&gt;&lt;/A&gt;.&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN&gt;Design and Implementation&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;ONNX Runtime Training is built on the same &lt;/SPAN&gt;&lt;SPAN&gt;&lt;A href="https://github.com/microsoft/onnxruntime" target="_blank" rel="noopener"&gt;open sourced code&lt;/A&gt;&lt;/SPAN&gt; &lt;SPAN&gt;as &lt;/SPAN&gt;&lt;SPAN&gt;the popular inference engine&lt;/SPAN&gt;&lt;SPAN&gt; for ONNX models&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt; Figure 1 shows the &lt;/SPAN&gt;&lt;SPAN&gt;hig&lt;/SPAN&gt;&lt;SPAN&gt;h-&lt;/SPAN&gt;&lt;SPAN&gt;level &lt;/SPAN&gt;&lt;SPAN&gt;architecture &lt;/SPAN&gt;&lt;SPAN&gt;for &lt;/SPAN&gt;&lt;SPAN&gt;ONNX Runtime’s ecosystem.&lt;/SPAN&gt;&lt;SPAN&gt; ORT &lt;/SPAN&gt;&lt;SPAN&gt;is a common runtime&lt;/SPAN&gt; backend&lt;SPAN&gt; that supports multiple framework frontends, such as PyTorch and Tensorflow&lt;/SPAN&gt;&lt;SPAN&gt;/Keras&lt;/SPAN&gt;&lt;SPAN&gt;. &lt;/SPAN&gt;&lt;SPAN&gt;It makes use of the Execution Provider interface to &lt;/SPAN&gt;&lt;SPAN&gt;perform computation on different hardware&lt;/SPAN&gt;&lt;SPAN&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;This enables us to build hardware&lt;/SPAN&gt;-&lt;SPAN&gt;agnostic&lt;/SPAN&gt;,&lt;SPAN&gt; graph&lt;/SPAN&gt;-&lt;SPAN&gt;level optimizations &lt;/SPAN&gt;&lt;SPAN&gt;that are extensible across different platforms&lt;/SPAN&gt;, as well as hardware specific optimization targeting platforms like NVIDIA GPU&lt;SPAN&gt;.&amp;nbsp;&lt;/SPAN&gt;We have also implemented additional optimizations, outlined below, to expedite training for large transformer models.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="margin-left: auto; margin-right: auto; border-style: hidden;" border="1"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SherlockNoMad_0-1589781650044.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192470iAE331AFD83BA4079/image-size/medium?v=v2&amp;amp;px=400" role="button" title="SherlockNoMad_0-1589781650044.png" alt="Figure 1. ONNX Runtime High Level Architecture" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 1. ONNX Runtime High Level Architecture&lt;/span&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H3 class="lia-align-justify"&gt;Static Graph Optimizations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Machine learning models are commonly abstracted as computation graphs. The computation graph used by deep learning frameworks could be either static or dynamic. In the current implementation, ORT has a view of the entire static computation graph. This makes it possible to enable many common graph optimization techniques, such as constant folding, redundant operation elimination, and operator fusion. They are first applied on the forward computation graph before auto differentiation engine builds the backward graph. As ORT has the global knowledge of data dependencies, it only builds the minimal gradient graph that is needed for targeted weights. &lt;SPAN&gt;Consequently&lt;/SPAN&gt;, activation tensors that are not needed for backward computation are automatically dropped after use. With a minimal training graph, it ensures that only essential computation is performed and memory consumption is minimized.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;Memory Usage Optimizations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;Over the last few years, the size of deep learning models has been growing rapidly. GPU memory consumption has become a limiting factor for large model training. ORT has made conscious efforts to preserve and reuse memory whenever possible. For example, ORT reuses the same buffer segments throughout a series of operations, including gradient accumulation, gradient scaling adjustment, allreduce communication and weight update computation (if the optimizer allows). ORT also tries to perform in-place operations if the source tensor is no longer consumed elsewhere in the computation graph. ORT’s kernel implementation also tries to minimize the use of scratch buffers, such as avoid using some memory intensive cuDNN functions, and reusing output buffer as scratch buffer if possible. As a result, ORT can train BERT with 2x the batch size as PyTorch. This enables us to utilize the GPU resources more efficiently, resulting in better performance on the same model and the ability to train larger models.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;ZeRO Stage 1 Integration&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/" target="_blank" rel="noopener"&gt;Zero Redundancy Optimizer (ZeRO)&lt;/A&gt;&lt;/SPAN&gt; &lt;SPAN&gt;is a memory optimization technique&lt;/SPAN&gt; from Microsoft Research. &lt;SPAN&gt;ZeRO is used to&lt;/SPAN&gt; save &lt;SPAN&gt;GPU &lt;/SPAN&gt;memory consumption by eliminating duplicated states across workers during distributed training. ZeRO has three main optimization stages. &amp;nbsp;Currently, &lt;SPAN&gt;ONNX Runtime&lt;/SPAN&gt; implemented Stage 1 of ZeRO. ZeRO Stage 1, known as the optimizer state partitioning, allows ORT to shard the optimizer states, including 1&lt;SUP&gt;st&lt;/SUP&gt; and 2&lt;SUP&gt;nd&lt;/SUP&gt; order moments (and fp32 copy of weights in mixed precision mode), across multiple workers with no extra communication overhead. With ZeRO, ORT can further boost batch size or train a larger model. In BERT-L pre-training, ZeRO allows batch size to further grow from 148 to 168 for phase 1 and from 23 to 27 for phase 2 in a 32GB V100. Distributed checkpointing is also introduced, as model persistent state is distributed across multiple workers. ZeRO can be enabled with a config flag.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;Native Mixed Precision Training Support&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;Unlike PyTorch’s dependency on&lt;/SPAN&gt;&lt;SPAN&gt; &lt;A href="https://github.com/NVIDIA/apex" target="_blank" rel="noopener"&gt;NVIDIA Apex&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN&gt; extension&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;ORT has implemented its own support for mixed precision &lt;/SPAN&gt;&lt;SPAN&gt;training&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt; Mixed precision training can be enabled with a config flag – no other code change needed. Under the hood, ORT converts the static computation graph into mixed precision mode through a series of graph transformations, i.e. running most of the computations in fp16 while keeping some numerically sensitive computation in fp32. ORT supports dynamic loss scaling by automatically inserting the computation nodes for loss scaling into the graph.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;Highly Scaleable Distributed Training&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;ORT seeks to build a unified highly scaleable distributed training framework for hybrid parallelism, including a mixed of data and model parallelisms. ORT supports data parallelism, which is the most popular distributed training mode adopted by many internal teams.&lt;SPAN&gt; We are enhancing&lt;/SPAN&gt; ORT to &lt;SPAN&gt;fully &lt;/SPAN&gt;support training extremely large models (&amp;gt;100 billion parameters). It has an experimental implementation of &lt;A href="https://arxiv.org/abs/1909.08053" target="_blank" rel="noopener"&gt;Megatron&lt;/A&gt;-style horizontal parallelism and &lt;SPAN&gt;we are &lt;/SPAN&gt;actively developing to support pipeline parallelism, such as &lt;A href="https://arxiv.org/abs/1806.03377" target="_blank" rel="noopener"&gt;PipeDream&lt;/A&gt;.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;CUDA Kernel Optimizations&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;ORT has introduced highly optimized CUDA kernels for some key operations including Reductions, Dropout and Softmax. In addition, we have also introduced a few key operator fusions with fused kernels for LayerNormalization, Gelu and their gradients, as well as Lamb Optimizer.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;&lt;SPAN&gt;Us&lt;/SPAN&gt;&lt;SPAN&gt;ing ORT&lt;/SPAN&gt; &lt;SPAN&gt;with &lt;/SPAN&gt;Py&lt;SPAN&gt;T&lt;/SPAN&gt;orch T&lt;SPAN&gt;raining &lt;/SPAN&gt;C&lt;SPAN&gt;ode&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;ONNX Runtime has the capability to train existing PyTorch models through its optimized backend. For this, we have introduced a python API &lt;SPAN&gt;for PyTorch, &lt;/SPAN&gt;called ORTTrainer, which can be used to switch the training backend for PyTorch models (instance of torch.nn.Module) to ORT. This requires some changes from the user, such as replacing the PyTorch optimizer, and optionally, setting flags to enable additional features such as mixed-precision training.&amp;nbsp;Under the hood, as shown in Figure 2, ORTTrainer first converts the PyTorch model to ONNX format through the PyTorch-ONNX exporter. Next, ORT backend takes over and applies graph optimizations, builds a training graph, performs transformations on it as needed (e.g. mixed-precision transformation), and sets up the graph elements needed for distributed training. In this design, while all the computation-intensive workload is offloaded onto the ORT backend, users can still enjoy the rich PyTorch frontend utilities, such as data loading, checkpointing , and easy specification of loss functions.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="margin-left: auto; margin-right: auto; border-style: hidden;" border="1"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SherlockNoMad_1-1589781650047.png" style="width: 977px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192468i1F4E131FC0494015/image-size/large?v=v2&amp;amp;px=999" role="button" title="SherlockNoMad_1-1589781650047.png" alt="Figure 2. Workflow for converting an PyTorch model into an ORT training graph" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 2. Workflow for converting an PyTorch model into an ORT training graph&lt;/span&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-justify"&gt;It is important to note that the current API is experimental and expected to see significant changes in the near future. A new version of the API is under active development. Our goal is to improve the interface to provide more seamless integration with PyTorch training that requires minimal changes in users’ training code, introduce new features, and present a more flexible API to cover advanced scenarios. Please refer to the &lt;A href="https://github.com/microsoft/onnxruntime-training-examples" target="_blank" rel="noopener"&gt;training examples&lt;/A&gt; for more details.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN&gt;Benchmarking&amp;nbsp;&lt;/SPAN&gt;Training Acceleration with ONNX Runtime&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;We now present the performance evaluation of BERT-L pre-training with ONNX Runtime in a 4-node DGX-2 cluster. In AzureML, we also reproduced the pre-training convergence for BERT-Large using sample from &lt;A href="https://github.com/NVIDIA/DeepLearningExamples" target="_blank" rel="noopener"&gt;NVIDIA’s DeepLearningExamplesle’s repo&lt;/A&gt;. We also validated fine tuning accuracy with SQuAD benchmarks.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 class="lia-align-justify"&gt;Benchmarking on DGX-2&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;We compared PyTorch and ORT’s BERT-L training performance on 4 NVIDIA DGX-2 machines (each with 16x 32GB V100) interconnected with InfiniBand. PyTorch’s result was obtained with NGC 20.03-py3 docker image following &lt;A href="https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT#pre-training" target="_self"&gt;Nvidia’s recipe&lt;/A&gt;. ORT’s result was obtained following the same recipe, except that ORT used bigger local batch sizes. As described above, ORT is able to run at a 2x batch size of PyTorch’s. ORT ran at a local batch size of 128 and 16 for phase 1 and 2 respectively, whereas PyTorch ran at batch size of 64 and 8. The effective global batch size remained unchanged in both cases. Overall, ORT achieved throughput improvement of 11.32% and 14.61% for phase 1 and 2. The total time to train was reduces by 11.16%, from 17.74 hours to 15.76 hours.&lt;/P&gt;
&lt;TABLE class="lia-align-justify lia-align-left" style="width: 800px; margin-left: auto; margin-right: auto;"&gt;&lt;CAPTION&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;Tab&lt;/SPAN&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;le &lt;/SPAN&gt;1. &lt;SPAN style="font-weight: normal !msorm;"&gt;Time to train on 4 &lt;/SPAN&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;NVIDIA DGX-2&lt;/SPAN&gt; machines&lt;/CAPTION&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="57px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="57px" class="lia-align-left" style="width: 136px; height: 57px;"&gt;
&lt;P&gt;&lt;STRONG&gt;PyTorch 1.5 with &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;NGC 20.03-py3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="57px"&gt;
&lt;P&gt;&lt;STRONG&gt;PyTorch 1.5 with &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;ONNX Runtime&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="57px" class="lia-align-left" style="width: 127.5px; height: 57px;"&gt;
&lt;P&gt;&lt;STRONG&gt;% Gain with &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;ONNX Runtime&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="30px"&gt;
&lt;P&gt;Phase 1 Throughput (ex/sec)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;11522.1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;12826.2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="30px"&gt;
&lt;P&gt;11.32%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="30px"&gt;
&lt;P&gt;Phase 2 Throughput (ex/sec)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;2150.0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;2464.1&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="30px"&gt;
&lt;P&gt;14.61%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="30px"&gt;
&lt;P&gt;Phase 1 time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;11.12&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;9.99&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="30px"&gt;
&lt;P&gt;10.16%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="30px"&gt;
&lt;P&gt;Phase 2 time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;6.62&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;5.77&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="30px"&gt;
&lt;P&gt;12.84%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="227.5px" height="30px"&gt;
&lt;P&gt;Total time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;17.74&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="136px" height="30px"&gt;
&lt;P&gt;15.76&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="127.5px" height="30px"&gt;
&lt;P&gt;11.16%&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H3 class="lia-align-justify"&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3 class="lia-align-justify"&gt;BERT-L Pre-training on AzureML&lt;/H3&gt;
&lt;P class="lia-align-justify"&gt;We performed BERT-L pre-training on 8x ND40rs_v2 cluster (each with 8x 32GB V100) interconnected with InfiniBand in AzureML. We used the same &lt;A href="https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT#pre-training" target="_self"&gt;Nvidia’s recipe&lt;/A&gt;, expect that we doubled the local batch size in the same way we mentioned above. Mixed precision mode and LAMB optimizer was used throughout the training. As the end of phase 2, we achieved the training loss of 1.31. The end-to-end training time was 18.32 hours.&lt;/P&gt;
&lt;TABLE class="lia-align-justify lia-align-left" style="height: 207px; width: 422px; margin-left: auto; margin-right: auto;" width="422"&gt;&lt;CAPTION&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;Table &lt;/SPAN&gt;2. &lt;SPAN style="font-weight: normal !msorm;"&gt;Time to train on Azure ML with &lt;/SPAN&gt;8x &lt;SPAN style="font-weight: normal !msorm;"&gt;ND40rs_v2&lt;/SPAN&gt;&lt;/CAPTION&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="57px" class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="57px" class="lia-align-left" style="width: 180.5px; height: 57px;"&gt;
&lt;P&gt;&lt;STRONG&gt;PyTorch 1.5 with ONNX Runtime&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="30px" class="lia-align-left"&gt;
&lt;P&gt;Phase 1 Throughput (ex/sec)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="30px"&gt;
&lt;P&gt;10751.4&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="30px" class="lia-align-left"&gt;
&lt;P&gt;Phase 2 Throughput (ex/sec)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="30px"&gt;
&lt;P&gt;2223.7&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="30px" class="lia-align-left"&gt;
&lt;P&gt;Phase 1 Time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="30px"&gt;
&lt;P&gt;11.92&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="30px" class="lia-align-left"&gt;
&lt;P&gt;Phase 2 Time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="30px"&gt;
&lt;P&gt;6.40&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="240.5px" height="30px" class="lia-align-left"&gt;
&lt;P&gt;Total Time (hours)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="180.5px" height="30px"&gt;
&lt;P&gt;18.32&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;Figure &lt;/SPAN&gt;3&lt;SPAN&gt; shows a&lt;/SPAN&gt; loss curve produced in a typical pre-training run. Phase 1 ends with a loss value around 1.4 after 7038 steps. Phase 2 continues with a jump of loss due to switch of sequence length, and &lt;SPAN&gt;it &lt;/SPAN&gt;finally decrease to &lt;SPAN&gt;a &lt;/SPAN&gt;loss value around 1.3.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;­&lt;/P&gt;
&lt;TABLE style="margin-left: auto; margin-right: auto; border-style: hidden;" border="1"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="100%"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SherlockNoMad_0-1589783446513.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192473i5C089F3ED60E1FDC/image-size/large?v=v2&amp;amp;px=999" role="button" title="SherlockNoMad_0-1589783446513.png" alt="Figure 3. ORT BERT-L pre-training loss curves" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 3. ORT BERT-L pre-training loss curves&lt;/span&gt;&lt;/span&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-justify"&gt;The pretrained model is then further finetuned on SQuAD dataset. Both full precision or mixed precision finetuning result in satisfactory Exact Match and F1 scores.&lt;/P&gt;
&lt;TABLE class=" lia-align-justify" style="width: 401px; margin-left: auto; margin-right: auto;" width="401"&gt;&lt;CAPTION&gt;Table 3. BERT-L fine-tuning result on SQuAD Dataset&lt;/CAPTION&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="125px" class="lia-align-left"&gt;
&lt;P&gt;&lt;STRONG&gt;Accuracy Metrics&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="119px" class="lia-align-left"&gt;
&lt;P&gt;&lt;STRONG&gt;Finetuning - FP32&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px" class="lia-align-left"&gt;
&lt;P&gt;&lt;STRONG&gt;Finetuning -&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;mixed precision&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD colspan="2" width="125px"&gt;
&lt;P&gt;Exact Match %&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="119px"&gt;
&lt;P&gt;84.63&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px"&gt;
&lt;P&gt;84.81&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="124px"&gt;
&lt;P&gt;F1 score %&lt;/P&gt;
&lt;/TD&gt;
&lt;TD colspan="2" width="120px"&gt;
&lt;P&gt;91.15&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156px"&gt;
&lt;P&gt;91.32&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;FONT size="4"&gt;A Case Study with Visual Studio using GPT-2 Medium&lt;/FONT&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Visual Studio uses ONNX Runtime to accelerate pre-training a 24-layer &lt;A href="https://openai.com/blog/better-language-models/" target="_blank" rel="noopener"&gt;GPT-2&lt;/A&gt; Medium model to power code autocompletion in the &lt;SPAN style="font-style: normal !msorm;"&gt;&lt;EM&gt;&lt;A href="https://visualstudio.microsoft.com/services/intellicode/" target="_blank" rel="noopener"&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;Intelli&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;C&lt;/SPAN&gt;&lt;/I&gt;&lt;I&gt;&lt;SPAN style="font-weight: normal !msorm;"&gt;ode&lt;/SPAN&gt;&lt;/I&gt;&lt;/A&gt;&lt;/EM&gt;&lt;/SPAN&gt;&amp;nbsp;of Visual Studio. Intellicode serves as a universal programming language compiler, effectively generating syntactically correct code in multiple programming languages, capable of completing an entire line of code in a couple of keystrokes. The training dataset for this task comprises over 1.2 billion lines of source code in Python, C#, JavaScript and TypeScript programming language from 52000 top-starred projects in GitHub.&amp;nbsp;We treat the source code data as a sequence of tokens corresponding to the output of a lexical analyzer.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The training was performed in a DGX-2 cluster. As we use a large sequence length of 1024, the memory usage is very intensive and PyTorch is only able to fit a batch size of 2 on the 32GB V100. ORT achieved 15.8% higher throughput under the identical local batch. As ORT is more memory efficient and able to run at a bigger batch size of 3, it delivered an overall 20.5% of the throughput improvement. As a result, the overall training time is reduced from 202 hours to 168 hours (with 1.2 x higher throughput). The final evaluation metric also achieved the same production shipping bar. &amp;nbsp;&lt;/P&gt;
&lt;TABLE class=" lia-align-left" style="height: 147px; width: 700px; margin-left: auto; margin-right: auto;"&gt;&lt;CAPTION&gt;Table 4. GPT-2 medium pre-training performance.&lt;/CAPTION&gt;
&lt;TBODY&gt;
&lt;TR style="mso-yfti-irow: -1; mso-yfti-firstrow: yes; mso-yfti-lastfirstrow: yes; mso-prop-change: 'Sherlock Huang' 20200517T1612;"&gt;
&lt;TD width="117.5px" height="57px"&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="134.5px" height="57px"&gt;
&lt;P&gt;&lt;STRONG&gt;Batch size / GPU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="165.5px" height="57px"&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Throughput (ex/sec)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="57px"&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Time to train (hours)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="117.5px" height="30px"&gt;
&lt;P&gt;PyTorch&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="134.5px" height="30px"&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="165.5px" height="30px"&gt;
&lt;P&gt;48.7&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="30px"&gt;
&lt;P&gt;202&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="mso-yfti-irow: 1; mso-prop-change: 'Sherlock Huang' 20200517T1612;"&gt;
&lt;TD width="117.5px" height="30px"&gt;
&lt;P&gt;PyTorch + ORT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="134.5px" height="30px"&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="165.5px" height="30px"&gt;
&lt;P&gt;56.4&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="30px"&gt;
&lt;P&gt;174&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="mso-yfti-irow: 2; mso-yfti-lastrow: yes; mso-prop-change: 'Sherlock Huang' 20200517T1612;"&gt;
&lt;TD width="117.5px" height="30px"&gt;
&lt;P&gt;PyTorch + ORT&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="134.5px" height="30px"&gt;
&lt;P&gt;3&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="165.5px" height="30px"&gt;
&lt;P&gt;58.7&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="148px" height="30px"&gt;
&lt;P&gt;160&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;Conclusion&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN&gt;Today, w&lt;/SPAN&gt;&lt;SPAN&gt;e&lt;/SPAN&gt;&lt;SPAN&gt; announced &lt;/SPAN&gt;&lt;SPAN&gt;the preview of training support in &lt;/SPAN&gt;&lt;SPAN&gt;O&lt;/SPAN&gt;&lt;SPAN&gt;NNX Runtime&lt;/SPAN&gt;&lt;SPAN&gt; with &lt;/SPAN&gt;&lt;SPAN&gt;a&lt;/SPAN&gt; &lt;SPAN&gt;focus on&lt;/SPAN&gt; &lt;SPAN&gt;large sc&lt;/SPAN&gt;&lt;SPAN&gt;ale &lt;/SPAN&gt;&lt;SPAN&gt;computation intensive&lt;/SPAN&gt;&lt;SPAN&gt; transformer &lt;/SPAN&gt;&lt;SPAN&gt;models&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt; We have demonstrated that, on a 4 DGX-2 cluster, ONNX Runtime can achieve a throughput gain of 11.32% and 14.61% for BERT-L phase 1 and 2 pre-training over PyTorch. The total training time was reduced by 11.16%, from 17.74 hours to 15.76 hours. ONNX Runtime is able to train BERT-L at a 2x batch size as PyTorch. We have shown a similar 20.5% speedup on a GPT-2 model, saving 34 hours in total training time. ONNX Runtime Training is integrated with PyTorch so that existing PyTorch training code can be directly accelerated for &lt;SPAN&gt;transformer &lt;/SPAN&gt;&lt;SPAN&gt;models training.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;Get Started&lt;/H2&gt;
&lt;P class="lia-align-justify" data-unlink="true"&gt;As a part of the announcement on using ONNX Runtime for training, we have released a Docker image with ORT and made available a repo at &lt;A href="https://github.com/microsoft/onnxruntime-training-examples" target="_blank" rel="noopener"&gt;https://github.com/microsoft/onnxruntime-training-examples&lt;/A&gt; that will host examples for ORT training. The first recipe available in this repo will help you get started with ORT for BERT pretraining in &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning service&lt;/A&gt; or &lt;A href="https://www.nvidia.com/en-us/data-center/dgx-2" target="_blank" rel="noopener"&gt;NVIDIA DGX-2&lt;/A&gt; and see the speedup in action. This recipe shows how to use ONNX Runtime training with BERT pretraining implementation in PyTorch. You can use this example either with the two datasets used in the original implementation or with your custom dataset to pretrain a BERT model and get the performance improvements with ORT reported in this blog. We are planning to add more examples for transformer models and other models. We also welcome your contribution to this repo&amp;nbsp;and feedback to improve ORT training capabilities and experience.&lt;/P&gt;</description>
      <pubDate>Thu, 29 Oct 2020 17:00:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/onnx-runtime-training-technical-deep-dive/ba-p/1398310</guid>
      <dc:creator>SherlockNoMad</dc:creator>
      <dc:date>2020-10-29T17:00:30Z</dc:date>
    </item>
    <item>
      <title>Training deep learning models at scale in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/training-deep-learning-models-at-scale-in-azure/ba-p/1399647</link>
      <description>&lt;P&gt;&lt;EM&gt;This post was co-authored by&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/210609" target="_blank" rel="noopener"&gt;@Chris Lauren&lt;/A&gt;&amp;nbsp;,&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/213932" target="_blank" rel="noopener"&gt;@Ian Finder&lt;/A&gt;&amp;nbsp;,&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/432899" target="_blank" rel="noopener"&gt;@David_Aronchick&lt;/A&gt;&amp;nbsp;,&amp;nbsp;Maxim Lukiyanov, Gopi Kumar&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Microsoft, like much of the tech industry, has adopted deep learning to power features across our business and accelerate the value we can provide for users. We use deep learning models to improve user productivity and provide innovative experiences in Office, power code autocompletion in Visual Studio, improve search results in Bing, predictively optimize availability in Azure compute among many other scenarios across the company.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Part of what allowed us to move quickly to embrace this world changing technology was investing in the infrastructure and services necessary to increase our data scientists and machine learning engineers’ productivity in a cloud-first, highly scalable manner. The combination of our investments, tooling and experience has allowed us to make big advancements in the efficiency of our teams. When we launched Azure Machine Learning to GA 18 months ago, our goal was to bring our productive and powerful environment to users of all skill levels who want to leverage machine learning and accelerate the practice of data science. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, any ML developer can use Azure to experiment with innovative models, run distributed model training and deploy trained models quickly to infuse more intelligence into their business applications. The same machine learning platform, tools and powerful compute are part of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/AA87dvg" target="_blank" rel="noopener noopener noreferrer"&gt;Microsoft’s AI at Scale initiative&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;that is enabling the next generation of AI capabilities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s take a deeper look at the Azure AI infrastructure and the Azure Machine Learning service which together comprise the machine learning platform we use to power our AI innovations across Bing, Office, Teams and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How Microsoft uses AI at global scale to increase our users' productivity&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;One example of how Microsoft uses the Azure AI infrastructure and Azure Machine Learning service is the new “&lt;A href="https://azure.microsoft.com/en-us/blog/how-azure-machine-learning-service-powers-suggested-replies-in-outlook/" target="_blank" rel="noopener noopener noreferrer"&gt;suggested replies&lt;/A&gt;” feature in Outlook. When you receive an email that can be answered with a quick response, Outlook suggests three responses that you can use to reply quickly, reducing the time and effort involved in replying to an email. This feature is powered by large scale deep learning natural language processing (NLP) model trained in Azure on powerful GPUs using&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/services/machine-learning-service/" target="_blank" rel="noopener noopener noreferrer"&gt;Azure Machine Learning&lt;/A&gt;.&lt;/P&gt;
&lt;DIV id="tinyMceEditorChris Lauren_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorChris Lauren_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="smart-reply.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192682i328F209E57E3497F/image-size/medium?v=v2&amp;amp;px=400" role="button" title="smart-reply.jpg" alt="The Microsoft Outlook &amp;quot;Suggested Replies&amp;quot; feature uses Azure Machine Learning to train deep learning models at scale" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;The Microsoft Outlook "Suggested Replies" feature uses Azure Machine Learning to train deep learning models at scale&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Outlook team uses&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines" target="_blank" rel="noopener noopener noreferrer"&gt;Azure Machine Learning pipelines&lt;/A&gt;&amp;nbsp;to process their data and train their models on a recurring basis in a repeatable manner. During the model training, the team uses GPU pools available in Azure. Once the model is created, data scientists can compare the model performance with previous models and evaluate which approaches perform better at recommending relevant suggested replies. Additionally, by using&amp;nbsp;&lt;A title="accelerated training with ONNX Runtime" href="http://aka.ms/ort-build2020" target="_blank" rel="noopener noopener noreferrer"&gt;accelerated training with ONNX Runtime&lt;/A&gt;&amp;nbsp;they were able to get up to an additional 45% improvement in training performance with minimal changes to their existing model training code. This increased the team's productivity even further by enabling more frequent machine learning experiments.&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/P&gt;
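&lt;P&gt;For readers who want to set up a similar recurring training workflow, the snippet below is a minimal sketch using the Azure Machine Learning Python SDK; the step, script and compute names are illustrative placeholders, not the Outlook team's actual pipeline.&lt;/P&gt;
&lt;PRE&gt;
# Minimal AML pipeline sketch (Python SDK v1); names are illustrative placeholders.
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

train_step = PythonScriptStep(
    name="train-model",              # placeholder step name
    script_name="train.py",          # your existing training script
    source_directory="./src",
    compute_target="gpu-cluster",    # an existing AML compute cluster
)

pipeline = Pipeline(workspace=ws, steps=[train_step])
Experiment(ws, "recurring-training").submit(pipeline)
&lt;/PRE&gt;
&lt;P&gt;Published pipelines can then be run on a schedule or triggered when new data lands, which is how a recurring retraining cadence is typically achieved.&lt;/P&gt;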
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Training models at this scale and frequency would not be possible without the scalable Azure AI infrastructure and Azure Machine Learning service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Powerful Azure AI infrastructure&lt;/STRONG&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure offers best-in-class infrastructure for AI workloads of all sizes, supporting GPU acceleration for popular frameworks like TensorFlow, PyTorch, and others from a single GPU, up to the flagship NDv2 VM offering eight 32 GB NVIDIA V100 GPUs with NVLink, as well as cluster-level 100 Gigabit InfiniBand EDR with out-of-box NCCL2 support to allow jobs to transparently harness the power of close to 1,000 GPUs concurrently.&lt;/P&gt;
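&lt;P&gt;As a generic illustration of the kind of distributed training this infrastructure supports, the sketch below shows a standard PyTorch DistributedDataParallel setup over NCCL. The model is a placeholder, and the environment variables are assumed to be set by whichever launcher (torchrun, MPI or an Azure ML distributed job) starts the processes.&lt;/P&gt;
&lt;PRE&gt;
# Generic PyTorch DDP setup over NCCL; a launcher is assumed to set RANK, WORLD_SIZE and LOCAL_RANK.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")                  # NCCL rides on NVLink / InfiniBand when available
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda(local_rank)     # stand-in for your real model
model = DDP(model, device_ids=[local_rank])
# The existing training loop continues unchanged; gradients are all-reduced across GPUs.
&lt;/PRE&gt;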
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure's virtualization technology harnesses features of the underlying hardware to offer nearly identical performance and architectural behavior to bare metal, along with the security and manageability benefits of virtual machines. Even at the driver level, Azure VMs employ standard NVIDIA device drivers, and the same Mellanox OFED InfiniBand RDMA drivers a customer might use on-premises, ensuring a rapid and seamless lift for existing AI workloads to begin leveraging Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Going forward, Azure will continue to invest in the promise of distributed training and interconnects with the latest technologies to deliver higher bandwidth and scale to larger clusters of future GPU products, like NVIDIA’s new A100, with the same commitment to standard communication methods used by workloads such as NCCL2.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning service&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Machine Learning service runs on top of the Azure AI Infrastructure and provides a complete solution to manage the end-to-end machine learning lifecycle: preparing data, building/training models, deploying the models to the cloud or the edge, and monitoring model performance to determine whether to retrain them on new data to improve over time.&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-inline-image-display-wrapper lia-image-align-center"&gt;&lt;SPAN class="lia-inline-image-caption"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Chris Lauren_1-1589806241067.png" style="width: 948px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192546i59F2DC646D60DE7E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Chris Lauren_1-1589806241067.png" alt="Machine learning lifecycle using Azure Machine Learning service" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Machine learning lifecycle using Azure Machine Learning service&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 id="toc-hId-1147390739"&gt;Training large scale deep learning models on a budget&lt;/H3&gt;
&lt;P&gt;Using the latest GPU hardware at scale can be expensive. However, Azure Machine Learning makes it easy to minimize infrastructure costs to meet AI development budgets. AML has cost management and budget controls that enable your teams to share &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute" target="_blank" rel="noopener noopener noreferrer"&gt;AML compute clusters&lt;/A&gt;, which provision powerful GPU and CPU VMs on demand to train large-scale models and turn them off when they are not being used. Additionally, you can further control costs using role-based access control and quota management.&lt;/P&gt;
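&lt;P&gt;For example, a shared cluster that scales down to zero when idle can be created with a few lines of the Python SDK; the VM size and names below are placeholders to adjust to your budget and quota.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: create a shared AML compute cluster that scales to zero when idle.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC24rs_v3",         # example GPU SKU
    min_nodes=0,                          # scale to zero so idle time costs nothing
    max_nodes=4,
    idle_seconds_before_scaledown=1800,
)
cluster = ComputeTarget.create(ws, "gpu-cluster", config)
cluster.wait_for_completion(show_output=True)
&lt;/PRE&gt;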
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Pre-built environments for machine learning frameworks&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Data scientists spend a lot of time preparing environments which contain combinations of open source software libraries. These libraries are tested individually, but as data scientists create their own software environment (often in a docker container) they must test the different versions of these libraries on their own to resolve version conflicts. This is a time consuming process which does not directly accrue value to training great models.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure ML reduces this complexity by providing&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-environments" target="_blank" rel="noopener noopener noreferrer"&gt;pre-built environments&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;for popular machine learning frameworks and their popular distributed training flavors. Among the supported frameworks are standard PyTorch, TensorFlow, their native distributed training backends, the popular distributed training framework Horovod, and a variety of communication protocols such as MPI, NCCL or Gloo. Azure Machine Learning makes it easy to explore new distributed models or take an existing model from the research community and run it as is on Azure compute with minimal or no modifications.&lt;/P&gt;
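&lt;P&gt;Using one of these curated environments from the Python SDK looks roughly like the sketch below; the environment and compute names are examples, and Environment.list(ws) shows what is available in your workspace.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: submit a training script against a curated environment (names are examples).
from azureml.core import Workspace, Environment, Experiment, ScriptRunConfig

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name="AzureML-PyTorch-1.6-GPU")   # example curated environment

src = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    compute_target="gpu-cluster",
    environment=env,
)
Experiment(ws, "pytorch-training").submit(src)
&lt;/PRE&gt;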
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additionally, Azure Machine Learning makes it easy to collaborate with other data scientists on your team using our new preview integrated Jupyter Notebooks. This further increases data scientists' productivity and leverages&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/nteract/nteract" target="_blank" rel="noopener noopener noreferrer"&gt;nteract&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;to bring Notebooks right into the AML Studio UI.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Chris Lauren_2-1589806241082.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192547iCB5B22AD4C3D29F9/image-size/large?v=v2&amp;amp;px=999" role="button" title="Chris Lauren_2-1589806241082.png" alt="Azure Machine Learning's new preview integrated Jupyter notebooks will soon offer collaboration capabilities" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Azure Machine Learning's new preview integrated Jupyter notebooks will soon offer collaboration capabilities&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 id="toc-hId--660063724"&gt;Scalable experimentation platform&lt;/H3&gt;
&lt;P&gt;Building new machine learning models requires iterative experimentation; sometimes it takes hundreds of iterations over the model design, algorithms and hyperparameters to achieve optimal performance. Some experiments that appear promising initially may yield poor results and researchers will have to step back and reassess results from the previous experiments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As data scientists experiment in this way, it becomes increasingly important to share experiment results easily, reproduce any experiment reliably and collaborate with their team using a platform that is able to scale to handle large models that take days to train and produce GBs of output metrics per run.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure ML provides a fully managed, highly scalable machine learning platform. ML developers can organize their model training runs into experiments, track with each run all of the relevant parameters and metrics of the model, versions of training data used, source code, git commit, hyperparameters and more. With all the relevant data tracked by default, Azure ML makes it easy to compare the experiment runs and determine which produced the best model to deploy.&lt;/P&gt;
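&lt;P&gt;Logging the values that make runs comparable is a one-liner per metric in the Python SDK; a minimal sketch (metric names and values are illustrative):&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch: track parameters and metrics for a run so experiments can be compared later.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()
exp = Experiment(ws, "model-comparison")

run = exp.start_logging()            # interactive run; submitted runs log the same way via Run.get_context()
run.log("learning_rate", 2e-5)       # hyperparameters
run.log("f1_score", 0.913)           # evaluation metrics
run.complete()
&lt;/PRE&gt;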
&lt;DIV id="tinyMceEditorChris Lauren_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorChris Lauren_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;Get started with&amp;nbsp;&lt;A title="Azure Machine Learning" href="https://azure.microsoft.com/en-us/free/ai/" target="_blank" rel="noopener noopener noreferrer"&gt;Azure Machine Learning&lt;/A&gt;&amp;nbsp;for free today!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/how-azure-machine-learning-enables-powerpoint-designer/" target="_blank" rel="noopener noopener noreferrer"&gt;Learn more about how Azure Machine Learning recommends design layouts PowerPoint Designer&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/extending-the-power-of-azure-ai-to-microsoft-365-users/" target="_blank" rel="noopener noopener noreferrer"&gt;Learn about other ways Azure AI is used in Teams and other Microsoft 365 products&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;A title="Learn more about accelerating deep learning model training using ONNX Runtime " href="http://aka.ms/ort-build2020" target="_blank" rel="noopener noopener noreferrer"&gt;Learn more about accelerating deep learning model training using ONNX Runtime&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://azure.microsoft.com/en-us/blog/bing-delivers-its-largest-improvement-in-search-experience-using-azure-gpus/" target="_blank" rel="noopener noopener noreferrer"&gt;Learn about how Bing uses BERT based NLP models to improve search&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/" target="_blank" rel="noopener noopener noreferrer"&gt;Learn about Turing-NLG, a 17-billion-parameter language model by Microsoft&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 20 May 2020 16:19:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/training-deep-learning-models-at-scale-in-azure/ba-p/1399647</guid>
      <dc:creator>Chris Lauren</dc:creator>
      <dc:date>2020-05-20T16:19:13Z</dc:date>
    </item>
    <item>
      <title>Build 2020 - Conversational AI updates</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-conversational-ai-updates/ba-p/1397685</link>
      <description>&lt;H2&gt;Conversational AI updates that help you build sophisticated and personalized experiences&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, more than ever, developers need to respond to the rapidly increasing demand from customers for support and accurate information - meeting them where they are – any time of the day and on an expanding range of platforms and devices. Within just the last few weeks, Azure AI has met unprecedented demand, underpinning over 1500 Covid-19 related bots via the Microsoft Health Bot service alone, in addition to the over 1.25 billion messages per month already handled by Azure Bot Service.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As part of our key updates for Build 2020, we continue to improve the developer experience and answer the evolving needs of enterprises looking to implement conversational experiences, both employee and customer facing. Significant announcements include the general availability (GA) of Bot Framework Composer, an integrated development tool for building conversational experiences, and the Virtual Assistant solution, an open source solution for building a branded virtual assistant. Azure Bot Service brings a public preview of Alexa integration along with new capabilities for the Language Understanding, Speech and QnA Maker Cognitive Services, including the general availability for container support.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bot Framework Composer is now GA!&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now generally available, Bot Framework Composer is a new open source, visual authoring canvas for developers to design and build conversational experiences. Composer focuses the bot creation process more on conversation design and less on the scaffolding required to begin building awesome bots. Composer easily brings together the common components required to build bots such as the ability to define Language Understanding models, integrate with QnA Maker and build sophisticated composition of bot replies using Language Generation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="73644938-526dee00-469c-11ea-92af-8963c9051e5b.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192598i3B4D55FEB8D2CA94/image-size/large?v=v2&amp;amp;px=999" role="button" title="73644938-526dee00-469c-11ea-92af-8963c9051e5b.png" alt="73644938-526dee00-469c-11ea-92af-8963c9051e5b.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Composer also supports building &lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/skills-conceptual?view=azure-bot-service-4.0" target="_self"&gt;Bot Framework Skills&lt;/A&gt; (bots that can perform a set of tasks for another bot), allowing for re-usability and componentization of bot solutions as their complexity and surface area increase. Skills built with Composer can be consumed by other bots built with Composer or using the Bot Framework SDK, as well as &lt;A href="https://docs.microsoft.com/en-us/power-virtual-agents/configuration-add-skills" target="_self"&gt;from Power Virtual Agents&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Find out more and get started with Composer at &lt;A href="https://aka.ms/bfcomposer" target="_blank" rel="noopener"&gt;https://aka.ms/bfcomposer&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Bot Framework SDK v4.9&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Build 2020 sees the release of version 4.9 of the JavaScript, C# and Python SDKs, and our commitment to meet developers where they are continues with the release of Bot Framework Java SDK Preview 4. This latest preview brings the ability to build bots for Microsoft Teams, aligning with the existing capabilities in the JS, C# and Python SDKs, including conversation bots, messaging extensions, and broad API and event coverage for the platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are also excited to make Adaptive Dialogs generally available! Adaptive Dialogs, which underpin the dialog design and management in Composer, enable developers to dynamically update conversation flow based on context and events. This is especially useful when dealing with more sophisticated conversation requirements, such as context switches and interruptions. Bot Framework Skills can now also leverage Adaptive Dialogs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Also available in early preview are the new &lt;A href="https://aka.ms/bfgeneration" target="_self"&gt;Generated Dialog tools&lt;/A&gt;. These new tools can automatically create robust Bot Framework Composer assets from JSON or JSON Schema that implement best practices like out-of-order slot filling, ambiguity resolution, help, cancel, correction and list manipulation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Other significant updates include a Developer Preview of Single Sign-On (SSO) capabilities in Microsoft Teams, answering a common requirement from our customers and, ultimately, reducing friction for end users. A new Health Check API allows for monitoring of bots in Production environments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more details of all of the changes in this latest release, &lt;A href="https://github.com/microsoft/botframework-sdk/issues/5836" target="_self"&gt;see the version 4.9 release notes&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Azure Bot Service&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;A href="https://aka.ms/bf-directline-ase" target="_self"&gt;Direct Line App Service Extension&lt;/A&gt;&amp;nbsp;is now generally available and enables customers to have even greater control over how data is stored and transmitted within their bot using Direct Line or Webchat. Often customers in industries such as banking, medical, legal and others deploy their solution into Virtual Networks (VNETS) which provides networking isolation capabilities. With the Direct Line App Service Extension (Direct Line-ASE), they can now deploy their bot inside the VNET and connect directly to their users’ clients rather than data passing through shared cloud infrastructure. In addition, Direct Line-ASE uses web sockets for communication between client and bot which can reduce latency as well.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A common scenario, repeatedly encountered by our customers, is the need for human handoff as part of a conversation where it is most appropriate, or a customer explicitly asks to speak to a human. Implementing such scenarios within your own bot could, historically, be complex and we are aiming to reduce the implementation time from weeks to minutes by making pre-built integrations for popular customer service platforms available, including LivePerson and Microsoft Omnichannel. Plus, if an existing integration does not already exist developing your own is now much easier with Microsoft now &lt;A href="https://aka.ms/bfhandoff" target="_self"&gt;providing common patterns&lt;/A&gt;, backed by updates to the Bot Framework SDK and protocol.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Whilst Azure Bot Service already provides a broad range of channels, customer requirements continue to evolve leading to demand for additional integrations. As part of responding to these demands, we are pleased to announce the public preview of a new channel for Amazon Alexa Skills, allowing you to build a bot that targets the popular home assistant platform, alongside the existing channels you already build for today. For more details on configuring the new Alexa channel preview, see the &lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-alexa?view=azure-bot-service-4.0" target="_self"&gt;updated Bot Framework docs&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Virtual Assistant 1.0 now generally available&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now generally available, &lt;A href="http://aka.ms/VirtualAssistant" target="_self"&gt;Virtual Assistant Solution Accelerator&lt;/A&gt;, has &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/build-2020-introducing-virtual-assistant-1-0/ba-p/1407833" target="_self"&gt;now reached version 1.0&lt;/A&gt;. Virtual Assistant allows developers to quickly stand up a fully functional Virtual Assistant that can be modified to be their own unique experience.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Virtual Assistant has now been fully moved over to Bot Framework Skills. Virtual Assistant Sample Skills have been moved into their own GitHub repository to allow for easier updates to the Virtual Assistant Core as well as to the Sample Skills that developers have used in their implementations. Virtual Assistant brings all of the items below together to provide the best starting point for developers looking to quickly build a bot that has the core components needed to scale and work right out of the box.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VA.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192564i1D2CB9D7A129A6F2/image-size/large?v=v2&amp;amp;px=999" role="button" title="VA.png" alt="VA.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Virtual Assistant has also added new capabilities as we move beyond v1.0 to allow developers to see how to leverage &lt;A href="https://aka.ms/bfskillsbuildpreview" target="_self"&gt;Bot Framework Composer to create skills for Virtual Assistant&lt;/A&gt;. This allows developers to unlock the power of Adaptive Dialogs. Virtual Assistant has added 3 new Preview Skills that are Composer / Adaptive versions (Calendar, To Do, Who).&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Read a &lt;A href="https://techcommunity.microsoft.com/t5/azure-ai/build-2020-introducing-virtual-assistant-1-0/ba-p/1407833" target="_self"&gt;deeper overview of version 1.0&lt;/A&gt; and get started with Virtual Assistant at &lt;A href="http://aka.ms/VirtualAssistant" target="_blank" rel="noopener"&gt;http://aka.ms/VirtualAssistant&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Speech, Language Understanding and QnA Maker&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure Cognitive Services brings AI within reach of every developer—without requiring machine-learning expertise. At Build 2020, we made several announcements related to new features and improvements across the Cognitive Services used within the Conversational AI eco-system.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Speech service is broadening language coverage and updating Speech to Text and neural Text to Speech with significant accuracy improvements. Additional new capabilities such as custom commands and pronunciation assessment are making it easier for customers to embed advanced speech capabilities into their solutions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Language Understanding service has released a major update to the portal, with a dramatically improved labeling experience, making it easier than ever to build apps and bots that can understand the complex language people tend to use. For example, somebody ordering a pizza might say, “I want two large chicken deep-pan pizzas, a medium pizza with olives and a side of fries.” This is a complex order, but using the new machine learned entity labelling and decomposition allows you to extract actionable data with ease (full order, quantities, toppings, modifiers and sides). The user has used two different language structures within the same order. This new portal makes it easier to break apart complex requests into related parts.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="luis.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192563i5DE014906E4B8DA2/image-size/large?v=v2&amp;amp;px=999" role="button" title="luis.png" alt="luis.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, Language Understanding, as well as Text Analytics, can now be deployed from the cloud to the edge, with containers support for both services now generally available!&amp;nbsp;For more detail, &lt;A href="https://aka.ms/LUISBlogBuild2020" target="_self"&gt;read the Language Understanding blog&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Bot Framework Orchestrator enters private preview and provides a transformer-model-based orchestration capability optimized for Conversational AI. This capability helps deliver improved accuracy for the skill-based routing that is critical to more sophisticated conversational experiences, reduced latency, and a multi-label classifier enabling multiple intents to be identified from utterances and processed individually. Moving forward, this capability will replace our current Dispatch capability.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The QnA Maker service also receives an update to the editing experience for QnA Knowledgebases, with the &lt;A href="https://aka.ms/rich-text-authoring" target="_self"&gt;addition of rich text editor support&lt;/A&gt;, as well as the addition of &lt;A href="https://aka.ms/role-based-access-control" target="_self"&gt;Role Based Access Control (RBAC)&lt;/A&gt; allowing for greater control and governance of knowledgebase management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="qna.jpg" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192565i22306EE430B3FA04/image-size/large?v=v2&amp;amp;px=999" role="button" title="qna.jpg" alt="qna.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Conversational AI Build sessions and on-demand videos&lt;/H2&gt;
&lt;P&gt;&lt;BR /&gt;We encourage you to find out more about our announcements via our Build sessions, either live or on demand afterwards. We also already have a range of on-demand content available now.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Build Breakout sessions&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conversational AI powered Customer and Employee Virtual Assistants&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;1st session Wednesday May 20th - 2:00 - 2:30 pm PST. &lt;EM&gt;Check the link below for all session times.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mybuild.microsoft.com/sessions?q=INT139" target="_blank" rel="noopener"&gt;https://mybuild.microsoft.com/sessions?q=INT139&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Accelerate bot development in Power Virtual Agents&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;1st session Tuesday May 19th - 3:00 - 3:30 pm PST. &lt;EM&gt;Check the link below for all session times.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mybuild.microsoft.com/sessions?q=INT155" target="_blank" rel="noopener"&gt;https://mybuild.microsoft.com/sessions?q=INT155&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Deploying Voice Assistants for driverless vehicles&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mybuild.microsoft.com/sessions/e5f46ac9-65f4-4e50-a3d4-f76a046ffd51" target="_blank"&gt;https://mybuild.microsoft.com/sessions/e5f46ac9-65f4-4e50-a3d4-f76a046ffd51&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;On demand content&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bot Framework Composer: Bot Framework’s new collaborative Conversational AI development environment&lt;/STRONG&gt;&lt;BR /&gt;&lt;A href="https://youtu.be/r9WQPSaLnaU" target="_blank" rel="noopener"&gt;https://youtu.be/r9WQPSaLnaU&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Use the Efficiency of Low-Code with the Extensibility to Azure to Design World-Class Chatbots&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://youtu.be/oJWJA-U4-m8" target="_blank" rel="noopener"&gt;https://youtu.be/oJWJA-U4-m8&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conversational AI and human agents working together&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://youtu.be/Z0IDiekbOp4" target="_blank" rel="noopener"&gt;https://youtu.be/Z0IDiekbOp4&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Author rich content in QnA Maker knowledge base and enable role based sharing&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DlB9klrdrqOk&amp;amp;data=02%7C01%7CGary.Pretty%40microsoft.com%7C9c5806c2a90544713b8e08d7fce273cb%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637255921855057359&amp;amp;sdata=3XgtnLH9afl1M9ddf0fCu48iXttCreIKySuhddCE47g%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;https://www.youtube.com/watch?v=lB9klrdrqOk&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;New features in Language Understanding&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fyoutu.be%2FZu6cYF7y9os&amp;amp;data=02%7C01%7CGary.Pretty%40microsoft.com%7C9c5806c2a90544713b8e08d7fce273cb%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637255921855057359&amp;amp;sdata=xji%2F%2BGggYI5VF6S58O73ueSf9jUjtm6jBIeiqd%2BARv4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;https://youtu.be/Zu6cYF7y9os&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Self-Driving Vehicle Systems in a Post COVID-19 World&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mybuild.microsoft.com/sessions/6abe0cd4-ee29-4f6b-abdf-521fce76f54e?source=sessions" target="_blank"&gt;https://mybuild.microsoft.com/sessions/6abe0cd4-ee29-4f6b-abdf-521fce76f54e&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 18:06:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-conversational-ai-updates/ba-p/1397685</guid>
      <dc:creator>GaryPrettyMsft</dc:creator>
      <dc:date>2020-05-21T18:06:19Z</dc:date>
    </item>
    <item>
      <title>Build 2020 - Language Understanding (LUIS) new portal, tools and container support</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-language-understanding-luis-new-portal-tools-and/ba-p/1401000</link>
<description>&lt;P&gt;We’ve listened to the feedback from our customers on how to create more accurate models while making the service even easier to use, with several core Language Understanding enhancements. In addition, based on your feedback, we’re making containers generally available in June. Finally, for developers that want to integrate Language Understanding into their CI/CD and release management pipelines, we’re previewing a sample repository template.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I cover these enhancements in more detail on the &lt;A href="https://channel9.msdn.com/Shows/AI-Show/New-Features-in-Language-Understanding" target="_blank" rel="noopener"&gt;AI Show with Seth Juarez&lt;/A&gt;, but have also captured the key points below.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Core language understanding enhancements&lt;/H2&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Upgraded machine learned entities replacing composite and simple entities&lt;/H3&gt;
&lt;P&gt;We’ve introduced the ability to add sub-entities to machine learned entities, going up to 5 levels deep.&amp;nbsp; This replaces composite entities and gives you more power to recognize more sophisticated entities, reuse them across your application and even recognize multiple actions in a single utterance.&amp;nbsp; In addition, this top-down thinking to build a schema is more natural than the bottom up thinking that was needed when creating composites.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have an application that used the old composite entities, you can easily upgrade that app to use the updated machine learned entities to take advantage of this new functionality.&amp;nbsp; This upgrade is seamless for you – you do not need to re-label any of your entities, and no changes are needed in your code.&amp;nbsp; The upgrade experience creates a new version of your application for you to give you the option of testing it separately.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Not only can you build and recognize more sophisticated entities, but an added benefit of sub-entities is that they can improve your model’s accuracy by using entities as features.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Updated portal&lt;/H3&gt;
&lt;P&gt;Another piece of feedback we received is you can lose context when updating your entities and features while labeling.&amp;nbsp; To address this, we’ve added the new entity palette (pictured below) which allows you to see all the ML entities and list entities you’ve created while you’re labeling new utterances.&amp;nbsp; You can also edit your entities and add or edit features while labeling utterances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AliciaEP_0-1589853166675.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192897i126962B3F1F89F69/image-size/large?v=v2&amp;amp;px=999" role="button" title="AliciaEP_0-1589853166675.png" alt="AliciaEP_0-1589853166675.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Screen shot of the new portal entity palette.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Improved labeling tools&lt;/H3&gt;
&lt;P&gt;We’ve listened to your feedback about difficulties with labeling utterances and there are several changes to the interface in the portal to make this interaction easier. Now you can label entities from both the new entity palette or inline.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When you use the entity palette you can label a child node and the parent will be inferred, and when you label the parent it will automatically merge.&amp;nbsp; Choose the entity labeler tool, then select the entity you want to label for and highlight it in the utterance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="labelwithpalette.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192900iFD519A8DE56167BF/image-size/large?v=v2&amp;amp;px=999" role="button" title="labelwithpalette.gif" alt="labelwithpalette.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Labeling with the new entity palette experience.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For customers that prefer inline labeling, we have improved that as well.&amp;nbsp; Inline labeling supports labeling entities in any order with a cascading menu.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="inlinelabeling.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192901iEB24302119582711/image-size/large?v=v2&amp;amp;px=999" role="button" title="inlinelabeling.gif" alt="inlinelabeling.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Labeling with enhanced inline labeling experience.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition, predictions are shown with a dotted line when a new utterance is added.&amp;nbsp; If all the predictions are correct for a new entity, you can confirm them all in one click.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="predictionandaccept.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192902i3335499CB32900D1/image-size/large?v=v2&amp;amp;px=999" role="button" title="predictionandaccept.gif" alt="predictionandaccept.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Viewing entity predictions and confirming&amp;nbsp;to label.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Normalized word forms&lt;/H3&gt;
&lt;P&gt;We have addressed customer issues around recognizing variations of a word, for example, changing 'flight' to 'flights' would show very different results for intent predictions. To solve this, we've added a setting called 'Normalize word forms' that will help your model recognize plurals of a word automatically and generalize better. Currently available in English only, go to your application's settings in the Manage pane to turn it on.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Change from constraints to required features&lt;/H3&gt;
&lt;P&gt;If you were using constraints before, we’ve changed this functionality slightly. You can still constrain the output of a machine learned entity.&amp;nbsp; Now you add a &lt;EM&gt;required&lt;/EM&gt; &lt;EM&gt;feature&lt;/EM&gt; to a machine learned entity, to ensure that entity won’t be predicted without the presence of the required feature.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Container support&lt;/H2&gt;
&lt;P&gt;A frequent customer request has been support for &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support" target="_blank" rel="noopener"&gt;Docker containers&lt;/A&gt;.&amp;nbsp; The feature has been in preview, and as of June 1 you can &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-container-howto?tabs=v3" target="_blank" rel="noopener"&gt;deploy and host Language Understanding anywhere&lt;/A&gt; using the GA of Docker containers.&amp;nbsp; When hosting the service in a container you have the flexibility to scale as much as you need without any limitations on TPS, and you can use Language Understanding in scenarios where you don’t wish to send data to the cloud.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Dev ops sample&lt;/H2&gt;
&lt;P&gt;For developers that want to integrate Language Understanding into their CI/CD and release management pipelines, we’re previewing a sample repository template. This template enables you to develop a Language Understanding application while following DevOps engineering practices that adhere to software engineering fundamentals around source control, testing, CI/CD and release management.&amp;nbsp; You can customize it for use with your own project.&amp;nbsp; &lt;A href="https://github.com/Azure-Samples/LUIS-DevOps-Template" target="_blank" rel="noopener"&gt;Learn more and try it out here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://luis.ai/" target="_blank" rel="noopener"&gt;Get started&lt;/A&gt; with Language Understanding today.&lt;/P&gt;
&lt;P data-unlink="true"&gt;Watch the AI Show with Seth Juarez&amp;nbsp;to see these enhancements in more detail.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=Zu6cYF7y9os" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/Zu6cYF7y9os/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Thu, 21 May 2020 19:37:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/build-2020-language-understanding-luis-new-portal-tools-and/ba-p/1401000</guid>
      <dc:creator>AliciaEP</dc:creator>
      <dc:date>2020-05-21T19:37:44Z</dc:date>
    </item>
    <item>
      <title>Data Scientists – How you can stay productive while working remotely</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/data-scientists-how-you-can-stay-productive-while-working/ba-p/1392822</link>
<description>&lt;P&gt;With COVID-19 continuing to impact people and countries around the world, and data science teams everywhere now working remotely, we will be running a series of blogs to help data science teams be productive in the current environment. This blog focuses on how Azure Machine Learning can foster collaboration and productivity when working remotely.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You may have lost access to a powerful workstation or server located at the office where you would normally execute your training jobs, and collaborating with other data scientists on your team may have become harder.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;On Azure we offer you two options to help you stay productive while working remotely: Azure Machine Learning and the Data Science Virtual Machine. Azure Machine Learning is designed to get you up and running quickly by using built-in notebooks with your choice of compute, which we manage on your behalf. Or, if you prefer managing your own VMs, you can use the Data Science Virtual Machine (DSVM), which comes pre-configured with up-to-date ML packages, deep learning frameworks and GPU drivers.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We recommend Azure Machine Learning for data science teams because it provides a fully managed collaborative development environment that is not offered by the Data Science Virtual Machine. Furthermore, Azure Machine Learning separates the compute from your notebooks by automatically mounting a cloud-based file store to host your notebooks. Simply put, this means that you can have different compute sizes without having to move files between machines – for example, you can develop &amp;amp; test some PyTorch code on a CPU compute instance and then switch the compute to a GPU machine to run the code. This architecture also means that you can delete a compute instance without losing your work.&lt;/P&gt;
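&lt;P&gt;As a small illustration (not from the original article), device-agnostic PyTorch code makes that CPU-to-GPU switch seamless - the same notebook cell runs unchanged on either compute size:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Minimal device-agnostic PyTorch sketch: the same code runs on a CPU
# compute instance and, unchanged, on a GPU compute instance.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 1).to(device)    # toy model for illustration
x = torch.randn(8, 10, device=device)  # batch of 8 examples
loss = model(x).mean()
loss.backward()
print(f"ran forward/backward on: {device}")&lt;/PRE&gt;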
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below is a table that outlines the key differences between these two options to help you decide which is the most appropriate for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Machine Learning&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;A fully managed low hassle way to get up-and-running. Has built-in security and collaboration. &lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;&lt;STRONG&gt;Data Science Virtual Machine&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Unmanaged machine learning workstation. &lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Recommended for&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Data science teams and individual data scientists looking for a collaborative environment to accelerate their overall machine learning process&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Individual data scientists that need a friction-free, pre-configured data science environment&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Built-in Collaboration&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Language Support&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Python and R&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Python, R, Julia, SQL, C#, Java, Node.js, F#&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Operating System&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Linux&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Linux and Windows&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Pre-Configured GPU&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Pre-Configured Frameworks&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Scikit, Tensorflow, PyTorch&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Scikit, Tensorflow, PyTorch, Spark (Standalone), Keras, CNTK, MXNet, Chainer, Caffe, Caffe2, Theano&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Hosted Notebooks (notebooks separated from compute)&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Share notebooks with a link&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Built-in SSO for Jupyterlab&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Pre-configured Tools&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Jupyter(lab) and RStudio&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="208"&gt;
&lt;P&gt;Linux: Jupyter(lab), RStudio&lt;/P&gt;
&lt;P&gt;Windows: Jupyter(lab), RStudio, VSCode, Visual Studio CE, Pycharm, Juno, PowerBI, SSMS, H2O, LightGBM, Rattle, Vowpal Wabbit, Weka, XGBoost, Apache Drill, Microsoft Office&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Over the next few sections we will show you how to get started with a Compute Instance or DSVM.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Getting started with Azure Machine Learning’s managed notebooks and compute&lt;/H2&gt;
&lt;P&gt;Firstly, you will need to create an Azure Machine Learning workspace. To create a workspace, you need an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://aka.ms/AMLFree" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN&gt; for free&lt;/SPAN&gt;&amp;nbsp;today.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;1. Sign in to the&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;by using the credentials for your Azure subscription.&lt;/P&gt;
&lt;P&gt;2. In the upper-left corner of Azure portal, select&amp;nbsp;&lt;STRONG&gt;+ Create a resource&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;3. Use the search bar to find&amp;nbsp;&lt;STRONG style="font-family: inherit;"&gt;Machine Learning&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;4. Select&amp;nbsp;&lt;STRONG&gt;Machine Learning&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;5. In the&amp;nbsp;&lt;STRONG&gt;Machine Learning&lt;/STRONG&gt;&amp;nbsp;pane, select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;to begin.&lt;/P&gt;
&lt;P&gt;6. Provide the following information to configure your new workspace:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE width="463"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Field&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;Workspace name&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Enter a unique name that identifies your workspace. In this example, we use&amp;nbsp;&lt;STRONG&gt;docs-ws&lt;/STRONG&gt;. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others. The workspace name is case-insensitive.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;Subscription&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Select the Azure subscription that you want to use.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;Resource group&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution. In this example, we use&amp;nbsp;&lt;STRONG&gt;docs-aml&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;Location&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Select the location closest to your users and the data resources to create your workspace.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;Workspace edition&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;Select&amp;nbsp;&lt;STRONG&gt;Basic&lt;/STRONG&gt;&amp;nbsp;or&amp;nbsp;&lt;STRONG&gt;Enterprise&lt;/STRONG&gt;. This workspace edition determines the features to which you'll have access and pricing. Learn more about&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml#sku" target="_blank" rel="noopener"&gt;Basic and Enterprise edition offerings&lt;/A&gt;&lt;/SPAN&gt;.&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;7. When you're finished configuring the workspace, select&amp;nbsp;&lt;STRONG&gt;Review + Create&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;8. Review the settings and make any additional changes or corrections. When you're satisfied with the settings, select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;9. To view the new workspace, select&amp;nbsp;&lt;STRONG&gt;Go to resource&lt;/STRONG&gt;.&lt;/P&gt;
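&lt;P&gt;If you prefer to script the workspace creation instead of using the portal, the same result can be achieved with the azureml-core Python SDK. The snippet below is a minimal sketch; the subscription ID, resource group, and region are placeholders to replace with your own values.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Sketch: create (or retrieve) a workspace with the azureml-core SDK.
# All values below are placeholders for illustration.
from azureml.core import Workspace

ws = Workspace.create(
    name="docs-ws",                      # workspace name used in the example above
    subscription_id="your-subscription-id",
    resource_group="docs-aml",
    create_resource_group=True,
    location="eastus",                   # pick the region closest to you
)
ws.write_config()  # saves config.json so later scripts can call Workspace.from_config()&lt;/PRE&gt;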
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Adding team members to the workspace&lt;/H2&gt;
&lt;P&gt;To add team members to the workspace, navigate to the Azure Machine Learning resource in the Azure portal and click on &lt;STRONG&gt;Access Control&lt;/STRONG&gt; followed by &lt;STRONG&gt;Add&lt;/STRONG&gt;.&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192508i06DAFF86D83324A9/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_2.png" alt="pic_2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on &lt;STRONG&gt;Add Role Assignment&lt;/STRONG&gt; and select an appropriate role assignment (e.g. Contributor, Reader, etc) and then search for the user or group to add (by name or email address).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once the workspace is provisioned and team members are added, you can access the Azure Machine Learning Studio – an immersive experience for managing the end-to-end machine learning lifecycle in a browser:&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://ml.azure.com" target="_blank" rel="noopener"&gt;https://ml.azure.com&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will see the following:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192509i507F45CAB68AD9F5/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_3.png" alt="pic_3.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can create, view, edit and execute your notebooks in Azure Machine Learning Studio (&lt;SPAN&gt;&lt;A href="https://ml.azure.com" target="_blank" rel="noopener"&gt;https://ml.azure.com&lt;/A&gt;&lt;/SPAN&gt;) by selecting &lt;STRONG&gt;Notebooks&lt;/STRONG&gt; from the left-hand menu.&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_4.png" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192506iBA4A97DCDCD1CBDD/image-size/small?v=v2&amp;amp;px=200" role="button" title="pic_4.png" alt="pic_4.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will see each team member has their own directory to store their notebooks and code. To create a new notebook, click on the &lt;STRONG&gt;File+&lt;/STRONG&gt; button. Provide a filename and select the file type to be a &lt;STRONG&gt;Python Notebook&amp;nbsp;&lt;/STRONG&gt;- for example:&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_5" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_6.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192512i86E2B23C9CF18EED/image-size/medium?v=v2&amp;amp;px=400" role="button" title="pic_6.png" alt="pic_6.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You will need to create compute to edit the file. To do this, click on the &lt;STRONG&gt;+ New Compute&lt;/STRONG&gt; button shown below:&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_7.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192511i3DDD02E42DAD610A/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_7.png" alt="pic_7.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;This will take you to a &lt;STRONG&gt;New Compute Instance&lt;/STRONG&gt; blade where you can enter the name of your compute and the VM size (there are CPU and GPU machines available). You can then edit the files within Azure Machine Learning Studio:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Alternatively, you&amp;nbsp;can click on the Jupyter dropdown and select Jupyter(lab). This will take you to Jupyter(lab).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_9.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192513i5E52FF1A7D5034BB/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_9.png" alt="pic_9.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;R users can leverage either Jupyter or RStudio. To navigate to RStudio, head back to the Azure Machine Learning Studio (&lt;SPAN&gt;&lt;A href="https://ml.azure.com" target="_blank" rel="noopener"&gt;https://ml.azure.com&lt;/A&gt;&lt;/SPAN&gt;) and click on &lt;STRONG&gt;Compute&lt;/STRONG&gt;, which will bring up the Compute Instances blade where you will see your compute instance. Click on RStudio.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_10.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192515i94A2D92D44AA2CBF/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_10.png" alt="pic_10.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_9" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;This will authenticate you into an RStudio Instance and you will see all your cloud-based notebook and code files.&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_10" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_11.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192517iBC247D6A86EFF5D1/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_11.png" alt="pic_11.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Collaboration&lt;/H3&gt;
&lt;P&gt;Azure Machine Learning provides a shared file system for all users in the workspace, which allows team members to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;share/edit each other’s code&lt;/LI&gt;
&lt;LI&gt;get help&lt;/LI&gt;
&lt;LI&gt;get their code reviewed by a team lead&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In addition, the compute instance comes pre-installed with Git - to clone a Git repository into this file share, we recommend that you create a Compute Instance &amp;amp; &lt;A href="https://docs.microsoft.com/azure/machine-learning/how-to-run-jupyter-notebooks#terminal" target="_self"&gt;open a terminal&lt;/A&gt;. Once the terminal is opened, you have access to a full Git client and can clone and work with Git via the Git CLI experience.&lt;/P&gt;
&lt;P&gt;We recommend that you clone the repository into your user’s directory so that others do not create conflicts directly on your working branch.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can clone any Git repository you can authenticate to (GitHub, Azure Repos, BitBucket, etc.)&lt;/P&gt;
&lt;P&gt;For a guide on how to use the Git CLI, read the&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://guides.github.com/introduction/git-handbook/" target="_blank" rel="noopener"&gt;git handbook&lt;/A&gt;&lt;/SPAN&gt;.&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Accessing your data on a Compute Instance&lt;/H3&gt;
&lt;P&gt;If your data is on your local machine, you can upload it to Azure Machine Learning and consume it from any compute instance. To do this, head to the studio (&lt;SPAN&gt;&lt;A href="https://ml.azure.com" target="_blank" rel="noopener"&gt;https://ml.azure.com&lt;/A&gt;&lt;/SPAN&gt;) and select &lt;STRONG&gt;Datasets&lt;/STRONG&gt; from the left-hand menu:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_12.png" style="width: 201px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192516iA06C0EDC15E0FCF2/image-size/medium?v=v2&amp;amp;px=400" role="button" title="pic_12.png" alt="pic_12.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_11" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click on &lt;STRONG&gt;+Create dataset&lt;/STRONG&gt; &amp;gt; from &lt;STRONG&gt;local files&lt;/STRONG&gt;. Choose a name for your dataset and a &lt;STRONG&gt;dataset type&lt;/STRONG&gt; - there are two types, which provide different capabilities:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py" target="_blank" rel="noopener"&gt;Tabular dataset&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a Pandas or Spark DataFrame.&lt;/LI&gt;
&lt;LI&gt;A&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py" target="_blank" rel="noopener"&gt;File dataset&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;references a single or multiple files in your datastores or public URLs. This provides you with the ability to download or mount the files to your compute.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In our case we are going to upload the Iris dataset, which is Tabular. Click &lt;STRONG&gt;Next&lt;/STRONG&gt;. On the next screen (&lt;STRONG&gt;Datastore and file selection&lt;/STRONG&gt;) you select a cloud-based datastore to upload the file to - Azure Machine Learning automatically creates a cloud datastore called &lt;STRONG&gt;workspaceblobstore&lt;/STRONG&gt; for you when the workspace is provisioned.&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_12" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_13.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192518i46BB8EA8DD681889/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_13.png" alt="pic_13.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click &lt;STRONG&gt;Next&lt;/STRONG&gt;. On the following screen (&lt;STRONG&gt;Settings and preview&lt;/STRONG&gt;) Azure Machine Learning will automatically detect the file type and parse the dataset into a table – make any necessary changes to the header row, etc., and click &lt;STRONG&gt;Next&lt;/STRONG&gt;. Confirm the schema is correct and click &lt;STRONG&gt;Next&lt;/STRONG&gt; followed by &lt;STRONG&gt;Create&lt;/STRONG&gt;. You will see that the data has been loaded into a cloud store and is registered as an asset in the Azure Machine Learning workspace:&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_13" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic_14.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192519iBEBB316A438EF4D3/image-size/large?v=v2&amp;amp;px=999" role="button" title="pic_14.png" alt="pic_14.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you click on the name of the Dataset it will bring up the details. When you click on the &lt;STRONG&gt;Consume&lt;/STRONG&gt; tab, you will see something like the following:&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_14" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Inkedpic_15_LI.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192523i990636AABBE34ADF/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Inkedpic_15_LI.jpg" alt="Inkedpic_15_LI.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Copy the &lt;/SPAN&gt;&lt;STRONG style="font-family: inherit;"&gt;Sample&lt;/STRONG&gt;&lt;STRONG style="font-family: inherit;"&gt; usage&lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt; code into a cell in your own notebook. When you execute that code block you will see the following:&lt;/SPAN&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_16" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Inkedpic_16_LI.jpg" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192524iDC41F2ACFA209469/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Inkedpic_16_LI.jpg" alt="Inkedpic_16_LI.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorsamkemp_17" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice that Azure Machine Learning will render the data file into a pandas data frame for you. Other team members in the workspace will also be able to access the data.&lt;/P&gt;
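&lt;P&gt;The generated sample usage code follows this pattern; here is a simplified sketch, where the dataset name is a placeholder for whatever you registered it under:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Sketch: consume a registered Tabular dataset as a pandas DataFrame.
# "iris" is a placeholder for the name you registered the dataset under.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()  # reads config.json created earlier
dataset = Dataset.get_by_name(ws, name="iris")
df = dataset.to_pandas_dataframe()
print(df.head())&lt;/PRE&gt;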
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If your data &lt;EM&gt;already exists&lt;/EM&gt; in the Azure Cloud (Blob, Azure Data Lake, Azure SQL DB/Postgres/MySQL) you can register that datastore in the Azure Machine Learning workspace and access data from it. To do this, click on &lt;STRONG&gt;Datastores&lt;/STRONG&gt; in Azure Machine Learning Studio &amp;gt; &lt;STRONG&gt;+ New Datastore &lt;/STRONG&gt;&amp;gt; choose a datastore name and select the type. Provide the credentials needed to access the store (Azure Machine Learning will store these credentials automatically in a secure Key Vault). Follow the same process as above to create a dataset but instead of choosing a local file choose &lt;STRONG&gt;From datastore&lt;/STRONG&gt;.&lt;/P&gt;
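&lt;P&gt;The same registration can also be done in code. The sketch below shows an Azure Blob container being registered as a datastore and a Tabular dataset being created from a file inside it; all names, paths and the account key are placeholders.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Sketch: register an existing Azure Blob container as a datastore, then
# build and register a Tabular dataset from a CSV file inside it.
# All names and the account key are placeholders.
from azureml.core import Workspace, Dataset, Datastore

ws = Workspace.from_config()
blob_store = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="my_blob_store",
    container_name="data",
    account_name="mystorageaccount",
    account_key="your-account-key",  # stored securely in the workspace Key Vault
)
dataset = Dataset.Tabular.from_delimited_files(path=(blob_store, "iris/iris.csv"))
dataset = dataset.register(workspace=ws, name="iris-from-blob")&lt;/PRE&gt;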
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Enterprise security&lt;/H2&gt;
&lt;P&gt;Azure Machine Learning has comprehensive built-in enterprise security features such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;VNET Support&lt;/LI&gt;
&lt;LI&gt;RBAC&lt;/LI&gt;
&lt;LI&gt;Private Link Support&lt;/LI&gt;
&lt;LI&gt;Authentication&lt;/LI&gt;
&lt;LI&gt;Monitoring&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Full details can be gleaned from the &lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;How to Create your Data Science Virtual Machine&lt;/H2&gt;
&lt;P&gt;To create a Data Science Virtual Machine instance:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to the&amp;nbsp;&lt;A href="https://portal.azure.com/" target="_blank" rel="noopener"&gt;Azure portal&lt;/A&gt;. You might be prompted to sign in to your Azure account if you're not already signed in.&lt;/LI&gt;
&lt;LI&gt;Find the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine - Windows 2019" for Windows or "Data Science Virtual Machine - Ubuntu 18.04" for a Linux-based DSVM.&lt;/LI&gt;
&lt;LI&gt;Select the&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;&amp;nbsp;button at the bottom.&lt;/LI&gt;
&lt;LI&gt;You should be redirected to the "Create a virtual machine" blade.&lt;/LI&gt;
&lt;LI&gt;Fill in the&amp;nbsp;&lt;STRONG&gt;Basics&lt;/STRONG&gt;&amp;nbsp;tab:&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Subscription&lt;/STRONG&gt;: If you have more than one subscription, select the one on which the machine will be created and billed. You must have resource creation privileges for this subscription.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Resource group&lt;/STRONG&gt;: Create a new group or use an existing one.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Virtual machine name&lt;/STRONG&gt;: Enter the name of the virtual machine. This is how it will appear in your Azure portal.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Location&lt;/STRONG&gt;: Select the datacenter that's most appropriate. For fastest network access, it's the datacenter that has most of your data or is closest to your physical location. Learn more about&amp;nbsp;&lt;A href="https://azure.microsoft.com/global-infrastructure/regions/" target="_blank" rel="noopener"&gt;Azure Regions&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Image&lt;/STRONG&gt;: Leave the default value.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Size&lt;/STRONG&gt;: This should auto-populate with a size that is appropriate for general workloads. Read more about&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes" target="_blank" rel="noopener"&gt;Windows VM sizes in Azure&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Username&lt;/STRONG&gt;: Enter the administrator username. This is the username you will use to log into your virtual machine, and need not be the same as your Azure username.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Password&lt;/STRONG&gt;: Enter the password you will use to log into your virtual machine.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Review + create&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Review+create&lt;/STRONG&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Verify that all the information you entered is correct.&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG&gt;Create&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;How to access the Data Science Virtual Machine&lt;/H3&gt;
&lt;P&gt;If you provisioned a Windows DSVM follow the steps listed to&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/marketplace/cloud-partner-portal/virtual-machine/cpp-connect-vm" target="_blank" rel="noopener"&gt;connect to your Azure-based virtual machine&lt;/A&gt;&lt;/SPAN&gt;. Use the admin account credentials that you configured in the&amp;nbsp;&lt;STRONG&gt;Basics&lt;/STRONG&gt;&amp;nbsp;step of creating a virtual machine.&lt;/P&gt;
&lt;P&gt;You're ready to start using the tools that are installed and configured on the VM. Many of the tools can be accessed through&amp;nbsp;&lt;STRONG&gt;Start&lt;/STRONG&gt;&amp;nbsp;menu tiles and desktop icons.&lt;/P&gt;
&lt;P&gt;If you provisioned an Ubuntu DSVM, then you can access the VM in one of three ways:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;SSH for terminal sessions&lt;/LI&gt;
&lt;LI&gt;X2Go for graphical sessions&lt;/LI&gt;
&lt;LI&gt;JupyterHub and JupyterLab for Jupyter notebooks&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Follow the guidance on the&amp;nbsp;&lt;SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro#how-to-access-the-ubuntu-data-science-virtual-machine" target="_blank" rel="noopener"&gt;how to access an Ubuntu DSVM page&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;for further details on how to access using these methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We hope the guidance provided in this blog will help you get started.&lt;/P&gt;
</description>
      <pubDate>Mon, 18 May 2020 17:31:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/data-scientists-how-you-can-stay-productive-while-working/ba-p/1392822</guid>
      <dc:creator>samkemp</dc:creator>
      <dc:date>2020-05-18T17:31:00Z</dc:date>
    </item>
    <item>
      <title>CustomVision: Accelerating a model with ONNX Runtime on a CPU, GPU or Movidius Neural Compute Stick</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/customvision-accelerating-a-model-with-onnx-runtime-on-a-cpu-gpu/ba-p/1394275</link>
<description>&lt;P&gt;While I have written before about the speed of the Movidius:&amp;nbsp;&lt;A href="https://kevinsaye.wordpress.com/2019/02/23/up-and-running-with-a-movidius-container-in-just-minutes-on-linux/" target="_blank" rel="noopener"&gt;Up and running with a Movidius container in just minutes on Linux&lt;/A&gt;, there were always challenges “compiling” models to run on that ASIC.&amp;nbsp; Since that blog, Intel has been fast at work with&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.openvinotoolkit.org/" target="_blank" rel="noopener"&gt;OpenVINO&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and Microsoft has been contributing to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://microsoft.github.io/onnxruntime/" target="_blank" rel="noopener"&gt;ONNX&lt;/A&gt;.&amp;nbsp; Combining these, we can now take a model created in&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://customvision.ai/" target="_blank" rel="noopener"&gt;http://customvision.ai&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and run it at the edge with OpenVINO for acceleration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Confused?&amp;nbsp; The following block diagram shows the relationship:&lt;/P&gt;
&lt;CENTER&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192166iFA49DB8657EF32D7/image-size/large?v=v2&amp;amp;px=999" role="button" title="1.png" alt="1.png" /&gt;&lt;/span&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On just standard hardware and with a RTSP camera, I am able to score 3.4 frames per second on 944 x 480 x 24 bit images with version 1 of the Compute Stick or 6.2 frames per second with version 2.&amp;nbsp; While I can get something close to this using CPU, the Movidius frees the CPU and allows multiple “calling applications” where the CPU performance is limited to just one.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="margin-left: auto; margin-right: auto;" width="100%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="99" class="lia-align-center"&gt;OpenVINO Tag&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;Hardware&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;FPS&amp;nbsp;from RTSP&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;FPS Scored&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;CPU Average&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;Memory&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="99" class="lia-align-center"&gt;CPU_FP32&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;4 @ Atom 1.60 GHz (E3950)&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;25&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;3.43&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;300% (of 400%)&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;451 MB&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD class="lia-align-center"&gt;GPU_FP16&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;
&lt;P&gt;Intel® HD Graphics 505&lt;/P&gt;
&lt;P&gt;on E3950&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;25&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;6.3&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;70% (of 400%)&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;412 MB&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD class="lia-align-center"&gt;GPU_FP32&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;
&lt;P&gt;Intel® HD Graphics 505&lt;/P&gt;
&lt;P&gt;on E3950&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;25&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;5.5&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;75% (of 400%)&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;655 MB&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="99" class="lia-align-center"&gt;MYRIAD_FP16&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;Neural Compute Stick&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;25&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;3.6&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;20% (of 400%)&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;360 MB&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="99" class="lia-align-center"&gt;MYRIAD_FP16&lt;/TD&gt;
&lt;TD width="206" class="lia-align-center"&gt;Neural Compute Stick version 2&lt;/TD&gt;
&lt;TD width="70" class="lia-align-center"&gt;25&lt;/TD&gt;
&lt;TD width="74" class="lia-align-center"&gt;6.2&lt;/TD&gt;
&lt;TD width="101" class="lia-align-center"&gt;30% (of 400%)&lt;/TD&gt;
&lt;TD width="59" class="lia-align-center"&gt;367 MB&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;More info here:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://software.intel.com/content/www/us/en/develop/articles/get-started-with-neural-compute-stick.html" target="_blank" rel="noopener"&gt;https://software.intel.com/content/www/us/en/develop/articles/get-started-with-neural-compute-stick.html&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and most of this work is based off this reference implementation:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/README-ONNXRUNTIME-OpenVINO.md" target="_blank" rel="noopener"&gt;https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/README-ONNXRUNTIME-OpenVINO.md&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Similar to the reference implementation above, I base this approach on a Docker container, allowing it to be portable and deployed as an Azure IoT Edge module.&amp;nbsp; Note that while OpenVINO, ONNX and Movidius are supported on Windows, exposing the hardware to a container is only supported on Linux.&lt;/P&gt;
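&lt;P&gt;Inside the container, the exported model is served with ONNX Runtime using the OpenVINO execution provider. The following is a simplified, illustrative sketch of that scoring path; it assumes an onnxruntime build that exposes the OpenVINO execution provider and a model exported from CustomVision as model.onnx, and the input size, layout and preprocessing shown are assumptions to verify against your exported model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Sketch: score one image with ONNX Runtime and the OpenVINO execution provider
# (CPU/GPU/MYRIAD targeting is handled by the OpenVINO build and configuration).
# Model path, input size/layout and preprocessing are assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx",
                               providers=["OpenVINOExecutionProvider"])

input_meta = session.get_inputs()[0]
img = Image.open("1.jpg").resize((224, 224))            # assumed input size
tensor = np.asarray(img, dtype=np.float32)[np.newaxis]  # batch of 1; match your model's expected shape

outputs = session.run(None, {input_meta.name: tensor})
print(outputs)  # raw predictions; post-processing depends on the exported model&lt;/PRE&gt;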
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in CustomVision.AI, create and train a model, then export it as ONNX.&amp;nbsp; &lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;Note 6/22/2020: Use the “General (compact)” and not the “General (compact) [S1]”, as the second Domain currently does not work as expected.&lt;/SPAN&gt; &lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192164iAF82D9E52947F838/image-size/large?v=v2&amp;amp;px=999" role="button" title="2.png" alt="2.png" /&gt;&lt;/span&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Using Git, clone the following repository:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/ksaye/CustomVisionWithMovidius.git" target="_blank" rel="nofollow noopener"&gt;https://github.com/ksaye/CustomVisionWithMovidius.git&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Modify&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/ksaye/CustomVisionWithMovidius/blob/master/Dockerfile#L67" target="_blank" rel="noopener"&gt;line 67&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of the Dockerfile to reflect the URL of your exported ONNX zip file.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;On Linux with Docker or Moby installed, open a command prompt in the directory where you cloned the repository and run the command shown below.&amp;nbsp; This will take a while.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;docker build --rm -t customvisionwithmovidius --network host .&lt;/PRE&gt;
&lt;CENTER&gt;&lt;/CENTER&gt;&lt;CENTER&gt;&lt;/CENTER&gt;&lt;CENTER&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192167iEE64C00E2098DAC9/image-size/large?v=v2&amp;amp;px=999" role="button" title="3.png" alt="3.png" /&gt;&lt;/span&gt;&lt;/CENTER&gt;&lt;CENTER&gt;&lt;/CENTER&gt;&lt;CENTER&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;On the host Linux PC, run the following commands to ensure that the application has access to the USB or Integrated MyriadX ASIC:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# add the current user to the "users" group and install udev rules for the Myriad devices
sudo usermod -a -G users "$(whoami)"

echo 'SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1"' | sudo tee /etc/udev/rules.d/97-myriad-usbboot.rules
echo 'SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1"' | sudo tee -a /etc/udev/rules.d/97-myriad-usbboot.rules
echo 'SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1"' | sudo tee -a /etc/udev/rules.d/97-myriad-usbboot.rules

sudo udevadm control --reload-rules

sudo udevadm trigger

sudo ldconfig&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;With the image built, run the following command to create the container, start it, and monitor the log files.&amp;nbsp; Note that this web service listens on port 87 by default.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;docker create --net=host --privileged -v /dev:/dev --name customvision customvisionwithmovidius &amp;amp;&amp;amp; docker start customvision &amp;amp;&amp;amp; docker logs -f customvision&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 6:&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;To send an image to the web service, simply run the following curl command, replacing&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;EM&gt;1.jpg&lt;/EM&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;with your image.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;curl -X POST http://127.0.0.1:87/image -F imageData=@1.jpg&lt;/PRE&gt;
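&lt;P&gt;If you prefer Python to curl, the equivalent request (same endpoint and form field as the curl command above) looks like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# Equivalent of the curl command above: POST an image to the local web service.
import requests

with open("1.jpg", "rb") as f:
    resp = requests.post("http://127.0.0.1:87/image", files={"imageData": f})
print(resp.status_code, resp.text)&lt;/PRE&gt;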
&lt;P&gt;Looking at the screen, I see the Myriad was found, and the ONNX model was supported and loaded.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="4.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192168i9507BE293B7E355E/image-size/large?v=v2&amp;amp;px=999" role="button" title="4.png" alt="4.png" /&gt;&lt;/span&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Because I have a process sending images to the web service as fast as it will accept them, you can see below that we get multiple inferences per second.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="5.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/192169i210011A620AD8D11/image-size/large?v=v2&amp;amp;px=999" role="button" title="5.png" alt="5.png" /&gt;&lt;/span&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure IoT Edge:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;While the goal of this blog was to show the foundational capabilities, the sample code can be adapted to run at the intelligent (Azure IoT) Edge, as shown here:&amp;nbsp;&lt;A href="https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/README-ONNXRUNTIME-OpenVINO.md" target="_blank" rel="noopener"&gt;https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/README-ONNXRUNTIME-OpenVINO.md&lt;/A&gt;.&amp;nbsp; Once you have the foundation, the possibilities are endless!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Summary:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With the integration of ONNX and OpenVINO, Microsoft and Intel have really expanded the hardware acceleration platform and have made it easy for developers to adopt.&amp;nbsp; While in the past you might have approached these challenges with raw CPU power, you can now get better performance - without taking resources away from other processes - by adding a cost-effective hardware accelerator like Intel's Myriad X.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Add to this the simplicity of &lt;A href="http://www.customvision.ai" target="_blank" rel="noopener"&gt;http://www.customvision.ai&lt;/A&gt;&amp;nbsp;and you can build some amazingly fast AI solutions at the edge at a fraction of the hardware and software cost.&amp;nbsp; Combine this with IoT Edge and you have fast, simplified AI models managed at the intelligent edge.&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jun 2020 10:59:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/customvision-accelerating-a-model-with-onnx-runtime-on-a-cpu-gpu/ba-p/1394275</guid>
      <dc:creator>KevinSaye</dc:creator>
      <dc:date>2020-06-22T10:59:10Z</dc:date>
    </item>
    <item>
      <title>Get started with automating form processing to enable organizations’ productivity</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/get-started-with-automating-form-processing-to-enable/ba-p/1387305</link>
      <description>&lt;P&gt;&lt;FONT size="3"&gt;&lt;EM&gt;This blog has been authored by Neta Haiby (Principal PM, Form Recognizer) and Prachi Jain (PMM, Azure AI)&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Healthcare organizations, hospitals, government agencies are at the forefront tackling COVID-19 responses. As the fight for the pandemic continues, so does the challenges in extracting and processing information, providing quick responses, and maintaining efficiency in processes.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Forms are everywhere and various scenarios require data extraction from forms like unemployment claims, employee sick leave, loan and mortgage applications, COVID-19 relief paperwork, clinical forms and more. Extracting data from these forms today is mostly manual which takes long processing cycles. Automating the data extraction enables companies to quicken the processing time enabling productivity and helps saves cost.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;As an example, we have analyzed a credit card authorization form. A credit card authorization form allows a 3rd party to make a payment by using a person’s or companies written consent and credit card information. This can either be for a 1-time charge or recurring (weekly, monthly, etc.) and is used in by insurance companies, on-boarding new patients and more. Automatic extraction of the data from these forms enables companies to speed up on-boarding new customers and reduces the processing time.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;The following credit card authorization forms contains fictitious content for illustrative purposes.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog 0.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191421iFBDE5D58FBE12498/image-size/large?v=v2&amp;amp;px=999" role="button" title="blog 0.png" alt="blog 0.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;To extract data from forms Form Recognizer enables you to get started with 5 forms to train a custom model and label the values of interest to extract the data you need. &amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog1.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191419i2374B77E5750505A/image-size/large?v=v2&amp;amp;px=999" role="button" title="blog1.png" alt="blog1.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorNetaH_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorNetaH_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;A href="https://aka.ms/form-recognizer" target="_blank" rel="noopener"&gt;Form Recognizer&lt;/A&gt; enables you to extract the values of interest from these forms. With &lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;Form Recognizer Sample Labeling Tool&lt;/A&gt; you can easily and quickly label the values of interest such as name, souse name, phone, address, transcript required and more and train a model to extract the data. You then can use this model to analyze all incoming forms and automatically extract the data as part of your workflow or Robotic Process Automation solution.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;How to Label, Train and Analyze Forms – &lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;You will need a set of at least six forms of the same type (same structure \ format). You'll use this data to train the model and test a form.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&amp;nbsp;Go to the &lt;A href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" target="_blank" rel="noopener"&gt;Azure Portal and create a Form Recognizer resource&lt;/A&gt; if you don’t already have one.&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&amp;nbsp;&lt;STRONG&gt;Label your forms&lt;/STRONG&gt; using the Form Recognizer Sample Labeling tool. You can use the &lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;try out site here&lt;/A&gt; or deploy it locally or in the cloud (&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/deploy-label-tool" target="_blank" rel="noopener"&gt;How to deploy Form Recognizer Sample Labeling Tool&lt;/A&gt;). To create a new project see the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool#set-up-input-data" target="_blank" rel="noopener"&gt;Train with Labels Quick Start Guide – Setup input data.&lt;/A&gt; For more information on how to label forms see also the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool#label-your-forms" target="_blank" rel="noopener"&gt;Train with Labels QuickStart guide – Label Your Forms.&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Train a custom model&lt;/STRONG&gt;, click the Train icon on the left pane to open the Training page. Then click the&amp;nbsp;Train&amp;nbsp;button to begin training the model. For more information on how to train a custom model see also the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool#train-a-custom-model" target="_blank" rel="noopener"&gt;Train with Labels QuickStart guide – Train a Custom Model.&lt;/A&gt; You can also train a custom model using the &lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/TrainCustomModelAsync" target="_blank" rel="noopener"&gt;Form Recognizer Train Custom Model API.&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog3.png" style="width: 351px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191422iD220750A388E5B76/image-size/medium?v=v2&amp;amp;px=400" role="button" title="blog3.png" alt="blog3.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3"&gt;5.&amp;nbsp;&lt;STRONG&gt;Analyze a Form&lt;/STRONG&gt; using your custom model, Click on the Predict (light bulb) icon on the left to test your model. Upload a form document that you haven't used in the training process. Then click the&amp;nbsp;Predict&amp;nbsp;button on the right to get key/value predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag. You can also analyze using the &lt;A href="https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm" target="_blank" rel="noopener"&gt;Form Recognizer Analyze Form API.&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog4.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191423i384CF17D59950F38/image-size/large?v=v2&amp;amp;px=999" role="button" title="blog4.png" alt="blog4.png" /&gt;&lt;/span&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;H3&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Learn more about how customers have built solutions for COVID-19 for data extraction using Microsoft Computer Vision and Form Recognizer –&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Ernst &amp;amp; Young&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;EY US developed an innovative Paycheck Protection Program (PPP) Loan Forgiveness Platform using Microsoft cloud and Form Recognizer. This enables banks at this critical juncture in the US economic recovery to efficiently meet the increasing demands across the end-to-end lending process required by the unique provisions outlined under the CARES Act. Learn more&amp;nbsp;&lt;A href="https://www.prnewswire.com/news-releases/ey-us-develops-innovative-paycheck-protection-program-ppp-loan-forgiveness-platform-using-microsoft-cloud-301044394.html" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Automation Anywhere&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Automation Anywhere has developed a solution to help accelerate the reporting and processing of many complex forms, including CRFs and E28.This highly secure solution, comprised of Automation Anywhere RPA with native Intelligent Document Processing (IDP) and Azure Cognitive Services Computer Vision API and Form Recognizer. Learn more &lt;A href="https://www.automationanywhere.com/company/press-room/automation-anywhere-launches-new-rpa-solutions-to-respond-to-global-covid-19-pandemic" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;LI-VIDEO vid="https://www.youtube.com/watch?v=GQbyX4QrziY" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/GQbyX4QrziY/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Additional Resources&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="3"&gt;Get started with deploying Form Recognizer –&lt;/FONT&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Custom Model&lt;/STRONG&gt; – extract text, tables and key value pairs&lt;/FONT&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model and extract form data by using the REST API &lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool" target="_blank" rel="noopener"&gt;QuickStart: Train a Form Recognizer model with labels using the sample labeling tool&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Form Recognizer Sample Labeling Tool&amp;nbsp;&lt;/STRONG&gt;&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Try it out: &lt;A href="https://fott.azurewebsites.net/" target="_blank" rel="noopener"&gt;https://fott.azurewebsites.net/&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;Open Source project: &lt;A href="https://github.com/microsoft/OCR-Form-Tools" target="_blank" rel="noopener"&gt;https://github.com/microsoft/OCR-Form-Tools&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Prebuilt receipts - &lt;/STRONG&gt;extract data from USA sales receipts&lt;/FONT&gt;&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-receipts" target="_blank" rel="noopener"&gt;Quickstart: Extract receipt data using the REST API&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Layout - &lt;/STRONG&gt;extract text and table structure (row and column numbers) from your documents&lt;/FONT&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-layout" target="_blank" rel="noopener"&gt;Quickstart: Extract layout data using the REST API &lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;FONT size="3"&gt;See &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/whats-new" target="_blank" rel="noopener"&gt;What’s New&lt;/A&gt;&lt;/FONT&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2020 19:23:55 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/get-started-with-automating-form-processing-to-enable/ba-p/1387305</guid>
      <dc:creator>NetaH</dc:creator>
      <dc:date>2020-05-13T19:23:55Z</dc:date>
    </item>
    <item>
      <title>MLOps is Not Enough</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/mlops-is-not-enough/ba-p/1386789</link>
      <description>&lt;P&gt;&lt;FONT size="7"&gt;MLOps is Not Enough&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;The Need for an End-to-End Data Science Lifecycle Process&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you've ever worked on (or with) a data science team, you know that consistently delivering value can be frustrating (to put it nicely). There are so many places where things can go wrong and projects can fail. It has almost become a cliché to talk about the high failure rates of data science projects. However, given the demonstrated value that AI and Data Science have shown across industries, it's a problem that needs to be solved. There's just too much value to leave on the table. The division between successful companies and those who fall behind will be largely influenced by the success of their data science capabilities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In response to this, it seems almost everyone is jumping on the MLOps train, and with good reason. MLOps has finally given us a way to consistently deploy, monitor, and retrain our models at scale. It's becoming clear that MLOps will be a required component of any successful data science team. So why do I say it’s not enough?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It's my view that MLOps on its own won't deliver. It will be transformational for making your existing models more robust, easier to retrain and monitor, etc. But what about new projects and new models? MLOps starts with a model, which means you've already found a model that works and now you want to enter the MLOps loop. Train, register, deploy, monitor, retrain, repeat.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The reality is that most teams and organizations struggle getting to that point consistently. Beyond deployment difficulties and risks, there are several other key areas where things go wrong:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Solving the wrong problems&lt;/LI&gt;
&lt;LI&gt;Building models that don't map well to business processes&lt;/LI&gt;
&lt;LI&gt;Bad assumptions about the data or a mismatch in population&lt;/LI&gt;
&lt;LI&gt;Converting the results of your experimentation into a production ready model&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Figure 1: An Internet Famous MLOps Diagram (with annotations)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="charleswm_0-1589382471621.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191361i90185017A8FB165D/image-size/large?v=v2&amp;amp;px=999" role="button" title="charleswm_0-1589382471621.png" alt="charleswm_0-1589382471621.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We've seen all of these kill data science projects well before teams got to the stage where they'd even think about deployment. The good news is that while data science is experimental in nature, it's not random, which means we can identify ways to account for these common patterns.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Introducing the Data Science Lifecycle Process&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In a&amp;nbsp;&lt;A href="https://cloudblogs.microsoft.com/industry-blog/microsoft-in-business/ai/2020/03/23/from-idea-to-value-a-process-for-managing-the-data-science-lifecycle-in-the-enterprise/" target="_blank" rel="noopener"&gt;previous article&lt;/A&gt;, I talked about the need for teams to create processes that cover the end-to-end data science process. We knew that MLOps would be a critical component, but based on our experience working with many data science teams, we still felt that there was a gap in the process when it came to going from the ideation phase to the point where you had a model you were ready to build and deploy. We dubbed what we came up with the Data Science Lifecycle Process (lovingly referred to as the DSLP).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’re happy to announce that we’ve open-sourced this process so that every data science team can start improving their processes immediately. We’ve documented the process and created issue templates and repos and it’s all available on GitHub in the&amp;nbsp;&lt;A href="https://github.com/dslp/dslp-repo-template" target="_blank" rel="noopener"&gt;DSLP repo.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The DSLP is designed to break down the siloes between data scientists, developers, IT, and the business. Data science projects are cross-functional by nature. This means we need to bridge the gap between the (often ad-hoc) experimental workflows of data scientists and the more systematic approach of engineering teams. We've attempted to do this by creating a branching strategy, issue templates, and workflow patterns that establish clear boundaries and handoff points from the model development process to the implementation and deployment process. With a clear pivot point, it becomes easy to apply all the best parts of MLOps to the implementation and deployment process, while still giving data scientists the flexibility they need in the problem framing, experimentation, and development parts of the process.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Figure 2: The Phases of an ML Project and the Roles Involved&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="charleswm_1-1589382471630.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191360i792CBDE2B4F21310/image-size/large?v=v2&amp;amp;px=999" role="button" title="charleswm_1-1589382471630.png" alt="charleswm_1-1589382471630.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The need for a process like this is likely apparent to anyone who has worked on delivering data science projects in an enterprise environment. The friction between data science teams and just about everyone else is generally pretty high and leads to a lot of throwing things over the wall. It's not good, folks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Feedback on the DSLP So Far&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As we developed the DSLP, we worked with several teams to test how well the process performed on real data science projects. We spoke with&amp;nbsp;&lt;A href="https://www.linkedin.com/in/cameronvetter/" target="_blank" rel="noopener"&gt;Cameron Vetter&lt;/A&gt;&amp;nbsp;(an ML Engineer) and&amp;nbsp;&lt;A href="https://www.linkedin.com/in/carolynolsen/" target="_blank" rel="noopener"&gt;Carolyn Olsen&lt;/A&gt;&amp;nbsp;(a Data Scientist) from&amp;nbsp;&lt;A href="https://octaviantg.com/" target="_blank" rel="noopener"&gt;Octavian Technology Group&lt;/A&gt;&amp;nbsp;about the challenges they've seen enterprise data science teams face and how the DSLP addresses many of them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Carolyn’s perspective:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In my experience, there are two places where existing data science processes really break down. The first is that the traditional software development processes don’t fit data science well, because data science is such a non-linear process. Workflows can quickly spread out like a hydra’s head. After a few weeks of work, data scientists may struggle to replicate exactly what they did along the way, or can get lost down analytical rabbit holes. DSLP makes non-linear data science processes focused and reproducible by linking exploration and modeling experiment artifacts directly with Issues that describe exactly what they’re meant to accomplish and what the results were.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The second struggle many data scientists have is the pain of building a great model then seeing it “sit on a shelf,” never getting into production. Like agile project management, DSLP helps keep work focused on business goals, increasing likelihood of stakeholder buy-in. It also facilitates hand-off from data scientists to the engineers getting the model into production, by giving data scientists a structured way to hand off code and documentation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Cameron’s perspective:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Data Science projects often have a disconnect between the engineers and data scientists.&amp;nbsp;These two groups work in vastly different ways, and often struggle to sync their efforts.&amp;nbsp;Engineers usually work within SDLC processes using them to align their teams towards the same goal.&amp;nbsp;Data Scientists tend to be more experimental in their work.&amp;nbsp;A Data Scientist will often go down a path and completely abandon it, starting down a new path many times during a project.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This experimental nature often leads Data Scientists to follow an ad-hoc process, making it difficult to hand off their work to ML engineers.&amp;nbsp;By the time the work is handed off to the engineers, the Data Scientists are unable to explain why certain decisions were made around modeling, data shaping, and data enhancement.&amp;nbsp;This can lead to a lot of throw-over-the-wall deployments where engineers are making decisions without understanding the how or the why behind what they are implementing.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The DSLP adds process to this experimental phase and does it within familiar SDLC tools that the engineers are comfortable with.&amp;nbsp;This allows engineers to use the documented issues combined with the branching strategy to understand the flow of what happened prior to the hand off. This understanding will impact how this model is brought to production.&amp;nbsp;This enables them to collaboratively iterate with the Data Scientists as they productionize the model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;What’s Next?&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We’re going to continue building out this process as we continue to work on the projects we do with our customers and partners. We’re sure that what we’ve built isn’t perfect, but from what we’ve seen it can create a major positive impact on data science teams. &amp;nbsp;Try it for yourself by implementing the&amp;nbsp;&lt;A href="https://github.com/dslp/dslp/blob/main/branching/branch-types.md" target="_blank" rel="noopener"&gt;branching strategy&lt;/A&gt;&amp;nbsp;and using the&amp;nbsp;&lt;A href="https://github.com/dslp/dslp/blob/main/issue-types/0-overview-issue-types.md" target="_blank" rel="noopener"&gt;issue templates&lt;/A&gt;&amp;nbsp;on your next project. Keep on the look-out for more content from us as we continue to develop, document, and evangelize this process.&lt;/P&gt;
&lt;P&gt;Feedback is welcome and as an open-source initiative we hope to create a vibrant community over time. If you want to learn more, test it out, or engage with us on implementing or improving this process, feel free to open an issue on GitHub or email us at&amp;nbsp;&lt;A href="mailto:dslp@microsoft.com" target="_blank" rel="noopener"&gt;dslp@microsoft.com&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/dslp/dslp" target="_blank" rel="noopener"&gt;https://github.com/dslp/dslp&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jun 2020 19:18:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/mlops-is-not-enough/ba-p/1386789</guid>
      <dc:creator>charleswm</dc:creator>
      <dc:date>2020-06-22T19:18:59Z</dc:date>
    </item>
    <item>
      <title>How to build chatbots that deliver better customer experiences and help support a surge in inquiries</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-chatbots-that-deliver-better-customer-experiences/ba-p/1374659</link>
      <description>&lt;H1&gt;How to build chatbots that deliver better customer experiences and help support a surge in inquiries&lt;/H1&gt;
&lt;P&gt;&lt;EM&gt;This blog has been authored by Jim Lewallen (Principal PM, Conversational AI) and Will Mendoza (Senior PMM, Azure AI)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Organizations globally are seeing a significant increase in demand from customers looking for support and accurate information. As a result, developers are building chatbots that address a range of scenarios, from simple to sophisticated, to better serve customers and communities. An example of a simple scenario is an informational Q&amp;amp;A chatbot that helps answer frequently asked questions. Sophisticated scenarios can include branded virtual assistants for your organization that make people more productive by assisting with common tasks like scheduling meetings or making a reservation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Before diving into how to build these types of chatbots, let us quickly walk through and define some of the key Azure AI components that you will need to build a chatbot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Key Azure AI services and tools&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Bot Framework&lt;/STRONG&gt; is the open source SDK and tools for developers to design, build and test chatbots. If you want full control of your chatbot, including building your own language models, you will want to start here.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Bot Service&lt;/STRONG&gt; is the cloud service through which developers can host a chatbot in Azure, and quickly connect to popular channels such as Teams, Skype, Slack, email, and webchat, as well as community adapters for other channels like Alexa and Google Assistant ecosystems.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="font-family: inherit;"&gt;Azure Cognitive Services &lt;/STRONG&gt;&lt;SPAN style="font-family: inherit;"&gt;are a comprehensive family of AI services that enable you to build intelligent applications. Common examples of Cognitive Services for chatbots include Language Understanding to understand the meaning of utterances from users and QnA Maker to convert FAQ documents into conversational question and answer experiences.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Simple informational Q&amp;amp;A chatbot&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the first types of chatbots you can build to get started quickly is the simple informational Q&amp;amp;A bot. These chatbots can be used to alleviate strained resources that are answering the same basic questions. By implementing these types of chatbots, organizations can scale to more easily answer frequently asked questions in a cost-effective manner, while enabling specialists to handle more nuanced requests.&amp;nbsp; For example, &lt;A href="https://customers.microsoft.com/en-us/story/744064-accenture-partner-professional-services-azure-bot-service" target="_blank" rel="noopener"&gt;Accenture built such a chatbot&lt;/A&gt; to help onboard new joiners in an organization who had the same common onboarding requests. Additionally, UNSW Sydney created a question chatbot to better engage with students and more quickly answer questions students might have. Other examples of common use cases include IT help desk password resets and customer service FAQs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_0-1588955231262.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/190176iD553C5B3FC4D1A6C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_0-1588955231262.png" alt="wmendoza_0-1588955231262.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Image of the UNSW Sydney&lt;/EM&gt; &lt;EM&gt;Question bot in Teams.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For this type of chatbot, you will need:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;QnA Maker&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Bot Service&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;QnA Maker is the easiest way to build a chatbot in Azure.&amp;nbsp; As described in the section above, QnA Maker will help you to quickly convert information in documents like FAQ pages and product manuals into a question and answer conversational experience. If you already have an FAQ document or page, you can build this experience in minutes within the QnA Maker portal, with the ability to answer common questions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Additionally, you can easily add a personality to handle small talk, answering odd questions like “Who made you?” or “Where are you from?” in a tone that is consistent with your brand. &amp;nbsp;Once you have tested and re-trained the service, you can deploy QnA Maker to Azure Bot Service, and publish to Teams, Slack, or other popular channels.&lt;/P&gt;
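&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once published, your knowledge base is exposed as a REST endpoint that any client or bot can call. The sketch below is a minimal Python example of querying a published knowledge base with the generateAnswer operation; the host, endpoint key, and knowledge base ID are placeholders that you would copy from the Publish page of the QnA Maker portal.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: query a published QnA Maker knowledge base.
# The host, endpoint key, and knowledge base ID below are placeholders; copy the
# real values from the "Publish" page of the QnA Maker portal.
import requests

host = "https://your-qnamaker-resource.azurewebsites.net"  # placeholder
endpoint_key = "your-endpoint-key"                          # placeholder
kb_id = "your-knowledge-base-id"                            # placeholder

def ask(question):
    """Send a question to the knowledge base and return the top answer."""
    url = f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, headers=headers, json={"question": question, "top": 1})
    response.raise_for_status()
    answers = response.json()["answers"]
    return answers[0]["answer"] if answers else "No answer found."

print(ask("Who made you?"))
&lt;/LI-CODE&gt;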
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="QnAMaker Chit-chat portal2.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191148iA8CD3B718BF963B3/image-size/large?v=v2&amp;amp;px=999" role="button" title="QnAMaker Chit-chat portal2.png" alt="QnAMaker Chit-chat portal2.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Image of QnAMaker.ai portal experience with chit-chat.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get Started: &lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base" target="_blank" rel="noopener"&gt;Quickstart guide&lt;/A&gt; to walk you through how to build the bot.&lt;/LI&gt;
&lt;LI&gt;If you prefer video tutorials, &lt;A href="https://www.youtube.com/watch?v=-2zzo2hHasQ" target="_blank" rel="noopener"&gt;here is a guided video&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Transactional support chatbot&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another common type of chatbot is one that can help customers not only self-serve with answers to frequently asked questions, but also retrieve information such as the status of a package or update a record in a system such as an insurance plan. Again, rather than take up limited resources chasing down answers, organizations are automating this capability so customers can self-serve. Developers at Jet.com &lt;A href="https://customers.microsoft.com/en-us/story/jet-dot-com-retailers-azure" target="_blank" rel="noopener"&gt;built a customer service chatbot&lt;/A&gt; to help them scale to meet their growing customer service needs.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_1-1588955231282.png" style="width: 605px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/190175iEF4491E3A89485F0/image-dimensions/605x239?v=v2" width="605" height="239" role="button" title="wmendoza_1-1588955231282.png" alt="wmendoza_1-1588955231282.png" /&gt;&lt;/span&gt;&lt;BR /&gt;&lt;EM&gt;Example of a Jet.com bot interaction.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For this type of chatbot, you will need:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Bot Framework&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Bot Service&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Cognitive Services (e.g. Language Understanding)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These transactional, self-service chatbots can be built with the following components: Bot Framework and Azure Bot Service to build the chatbot engine, Azure Cognitive Services such as Language Understanding to interpret utterances (e.g. inquiries or requests from users), and QnA Maker to easily answer commonly asked questions.&lt;/P&gt;
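&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To give a sense of the Language Understanding piece, here is a minimal sketch that sends an utterance to a published LUIS app through the v3 prediction REST API and reads back the top-scoring intent. The endpoint, app ID, and prediction key are placeholders; in a real bot the Bot Framework SDK would typically make this call for you.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: call the LUIS v3 prediction endpoint for a published app.
# The endpoint, app ID, and prediction key are placeholders for illustration only;
# in a real bot you would usually let the Bot Framework SDK handle this call.
import requests

endpoint = "https://your-luis-resource.cognitiveservices.azure.com"  # placeholder
app_id = "your-luis-app-id"                                          # placeholder
prediction_key = "your-prediction-key"                               # placeholder

def predict_intent(utterance):
    """Return the top intent and entities LUIS recognizes in an utterance."""
    url = f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
    params = {"query": utterance, "subscription-key": prediction_key}
    response = requests.get(url, params=params)
    response.raise_for_status()
    prediction = response.json()["prediction"]
    return prediction["topIntent"], prediction["entities"]

intent, entities = predict_intent("Where is my package?")
print(intent, entities)
&lt;/LI-CODE&gt;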
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get started:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-sdk-quickstart?view=azure-bot-service-4.0" target="_blank" rel="noopener"&gt;Quick start guide&lt;/A&gt; to begin with Bot Framework SDK.&lt;/LI&gt;
&lt;LI&gt;For a more visual experience, use the &lt;A href="https://docs.microsoft.com/en-us/composer/introduction" target="_blank" rel="noopener"&gt;Bot Framework Composer Quick start guide&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;View the &lt;A title="Bot Frameework Composer" href="https://www.youtube.com/watch?v=P-kKw2HGP3o" target="_self"&gt;video tutorial on Bot Framework Composer&lt;/A&gt;&lt;BR /&gt;Note: Download Bot Framework Composer directly for&amp;nbsp;&lt;A href="https://aka.ms/bf-composer-download-win" target="_self"&gt;Windows&lt;/A&gt;, &lt;A href="https://aka.ms/bf-composer-download-mac" target="_self"&gt;Mac&lt;/A&gt;, or&amp;nbsp;&lt;A href="https://aka.ms/bf-composer-download-linux" target="_self"&gt;Linux&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BFComposer visual.jpg" style="width: 610px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191159iD4855D8A3D36E274/image-size/large?v=v2&amp;amp;px=999" role="button" title="BFComposer visual.jpg" alt="BFComposer visual.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Image of the Bot Framework Composer tool.&amp;nbsp;Download directly for&amp;nbsp;&lt;A href="https://aka.ms/bf-composer-download-win" target="_self"&gt;Windows&lt;/A&gt;, &lt;A href="https://aka.ms/bf-composer-download-mac" target="_self"&gt;Mac&lt;/A&gt;, or&amp;nbsp;&lt;A href="https://aka.ms/bf-composer-download-linux" target="_self"&gt;Linux.&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Branded virtual assistant&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, some developers want to build their own custom virtual assistant capable of delivering personalized experiences, enabling users to make a wide range of inquiries or requests across many canvases, and even supporting the use of voice. &lt;A href="https://customers.microsoft.com/en-us/story/laliga-media-entertainment-azure" target="_blank" rel="noopener"&gt;La Liga&lt;/A&gt; and &lt;A href="https://www.youtube.com/watch?v=IW6V7ND5qis" target="_blank" rel="noopener"&gt;Vodafone&lt;/A&gt; are a couple of examples of organizations that have built their own custom voice assistants to better engage with their fans or customers.&lt;/P&gt;
&lt;P&gt;&lt;A title="La Liga Assistant - how we built it" href="https://www.youtube.com/watch?v=oZwtUXzXUi4" target="_blank" rel="noopener"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="wmendoza_3-1588955231390.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/190177i8C520EEAFD473DAD/image-size/medium?v=v2&amp;amp;px=400" role="button" title="wmendoza_3-1588955231390.png" alt="wmendoza_3-1588955231390.png" /&gt;&lt;/span&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Click the above image for a video on how La Liga built their own virtual assistant.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;For this type of chatbot, you will need:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/overview/virtual-assistant-solution/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Virtual Assistant solution accelerator&lt;/STRONG&gt;&lt;/A&gt; (which brings together key Azure services required)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To help simplify the steps required to solve for this scenario, we have made the Virtual Assistant solution accelerator available in our Github repository. The Virtual Assistant solution accelerator is an Azure Resource Management (ARM) template that orchestrates the deployment of the core Azure services required to deploy a custom virtual assistant.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get Started: &lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Virtual Assistant solution accelerator template &lt;A href="https://microsoft.github.io/botframework-solutions/overview/virtual-assistant-template/" target="_blank" rel="noopener"&gt;available on Github&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/1-intro/" target="_blank" rel="noopener"&gt;Quick start guide&lt;/A&gt; for written tutorial&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://youtu.be/u7Gql-ClcVA?t=563" target="_blank" rel="noopener"&gt;Watch this video&lt;/A&gt; for additional guidance&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Bringing it all together&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In short, as customer inquiry volumes surge, pressure is being put on organizations to continue to deliver great customer experiences and meet the needs of their customers. To help meet this need, developers are building informational chatbots to answer FAQs, transactional chatbots to allow customers to self-serve, or even context-aware branded virtual assistants for improved customer experiences.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We hope this blog has helped outline how you can bring together Azure AI offerings such as Azure Bot Service, Bot Framework and Cognitive Services to build chatbots that better serve your customers and communities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Additional resources:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For more details on building your first chatbot, you can &lt;A href="https://azure.microsoft.com/en-us/resources/create-your-first-intelligent-bot-with-microsoft-ai/" target="_blank" rel="noopener"&gt;download this developer guide.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="ebook-create-your-first-intelligent-bot-with-Azure-ai.png" style="width: 350px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/191351iA8C8504EBEBEF8CB/image-size/large?v=v2&amp;amp;px=999" role="button" title="ebook-create-your-first-intelligent-bot-with-Azure-ai.png" alt="ebook-create-your-first-intelligent-bot-with-Azure-ai.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 15 May 2020 20:20:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-to-build-chatbots-that-deliver-better-customer-experiences/ba-p/1374659</guid>
      <dc:creator>wmendoza</dc:creator>
      <dc:date>2020-05-15T20:20:02Z</dc:date>
    </item>
    <item>
      <title>Finetune neural text-to-speech output  with advanced customization features</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/finetune-neural-text-to-speech-output-with-advanced/ba-p/1348941</link>
      <description>&lt;P&gt;&lt;EM&gt;This post was co-authored by&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/175688" target="_blank" rel="noopener"&gt;@Qinying Liao&lt;/A&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Yueying Liu, Sheng Zhao,&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/23979" target="_blank" rel="noopener"&gt;@Anny Dow&lt;/A&gt;&amp;nbsp;, Bohan Li and Jun-wei Gan&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt;&lt;SPAN&gt; (TTS)&lt;/SPAN&gt;&amp;nbsp;converts text to lifelike speech for more natural interfaces. With natural-sounding speech that matches the stress patterns and intonation of human voices, neural TTS significantly reduces listening fatigue when users are interacting with AI systems.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Common use cases for neural TTS include, but are not limited to, voice assistants, connected cars, smart-home devices, and various e-learning systems as well as reading apps. While neural TTS provides you a set of voices that already sound natural and human-like, you may still want to modify the speech properties to make voices better fit your scenario and context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A wide range of fine-tuning features are available through &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp" target="_blank" rel="noopener"&gt;Speech Synthesis Markup Language (SSML)&lt;/A&gt; and a code-free &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;Audio Content Creation&lt;/A&gt; tool for you to adapt TTS output, such as adding or removing a pause/break, changing the pronunciation, adjusting the speaking rate, volume, pitch and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this article, we’ll deep dive into the latest advanced features that can help you adapt the intonation and stress patterns of neural TTS output as well as define custom lexicon for your applications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Control the prosody of your neural TTS output&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-prosody" target="_blank" rel="noopener"&gt;Prosody&lt;/A&gt;, as one of the SSML elements, can be used to specify changes to pitch, contour, range, rate, duration, and volume for the TTS output, making your audio result easier to follow.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are glad to share that the adjustments around contour, breaks/pauses and speaking rates of neural TTS are smoothly supported today. Now you can easily tailor the prosody of your TTS output using SSML or the Audio Content Creation tool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Pitch contour&lt;/H3&gt;
&lt;P&gt;Pitch contour represents changes in pitch at specified times in speech output. By tuning the pitch contour, you can make the intonation of your synthesized output sound different. For example, you can use it to emphasize different parts of your sentence or change the tone to make it sound more natural.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Here are some examples of adjusting pitch contour with SSML.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE class=" lia-align-left" style="width: 750px;" width="750"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="365.455px"&gt;
&lt;P class="lia-align-justify"&gt;Original&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="383.636px"&gt;
&lt;P&gt;Tuned&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="365.455px"&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;FONT size="2"&gt;I never said he stole your money&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&amp;nbsp;&lt;/P&gt;
&lt;AUDIO style="background-image: url('img/object.gif');" controls="controls"&gt;
&lt;SOURCE src=" http://tts.blob.core.windows.net/blog/2020Aprilblog/pitch1.wav "&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="383.636px"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="2"&gt;&amp;lt;prosody contour="(11%, +65%) (60%, -43%) (80%, -34%)"&amp;gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;I never said he stole your money.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="2"&gt;&amp;lt;/prosody&amp;gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;AUDIO style="background-image: url('img/object.gif');" controls="controls"&gt;
&lt;SOURCE src=" http://tts.blob.core.windows.net/blog/2020Aprilblog/pitch1after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="365.455px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;That's how you pronounce it ?&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src=" http://tts.blob.core.windows.net/blog/2020Aprilblog/pitch2.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="383.636px"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="2"&gt;&amp;lt;prosody contour="(60%, -11%) (85%, +85%)"&amp;gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;That's how you pronounce it ?&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="2"&gt;&amp;lt;/prosody&amp;gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/pitch2after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
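&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of how this SSML can be used from code, the example below wraps the first contour adjustment above in a complete SSML document and synthesizes it with the Speech SDK for Python. The subscription key, region, and voice name are placeholders; any neural voice available to your Speech resource can be substituted.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch, assuming the Speech SDK for Python (pip install azure-cognitiveservices-speech).
# The subscription key, region, and voice name below are placeholders for illustration.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="your-speech-key", region="your-region")
audio_config = speechsdk.audio.AudioOutputConfig(filename="contour-tuned.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

# A complete SSML document wrapping the pitch-contour fragment from the table above.
ssml = """
&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"&amp;gt;
  &amp;lt;voice name="en-US-AriaNeural"&amp;gt;
    &amp;lt;prosody contour="(11%, +65%) (60%, -43%) (80%, -34%)"&amp;gt;
      I never said he stole your money.
    &amp;lt;/prosody&amp;gt;
  &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
"""

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio written to contour-tuned.wav")
else:
    print("Synthesis did not complete:", result.reason)
&lt;/LI-CODE&gt;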
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Breaks/pauses&lt;/H3&gt;
&lt;P&gt;You can insert pauses (or breaks) between words or adjust pauses automatically added by the neural voices.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 750px;" width="750"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="363.636px"&gt;
&lt;P&gt;Original&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.455px"&gt;
&lt;P&gt;Tuned&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="363.636px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Now 50 years after the event, he may finally have an answer.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/Aria_break.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.455px"&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Now &lt;STRONG&gt;&amp;lt;break time="100ms" /&amp;gt;&lt;/STRONG&gt;50 years after the event, he may finally have an answer.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/Aria_break_after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_5" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="363.636px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2" color="#000000"&gt;通过语音合成技术，我们可以创造出不同风格的智能语音。&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/Break-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="385.455px"&gt;
&lt;P&gt;&lt;FONT size="2" color="#000000"&gt;通过语音合成技术，我们可以&lt;STRONG&gt;&amp;lt;mstts:ttsbreak strength="none" /&amp;gt;&lt;/STRONG&gt;创造出不同风格的智能语音。&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/Break-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Adjust rate&lt;/H3&gt;
&lt;P&gt;Rate indicates the speed at which text is read aloud. You can adjust the speed of a whole sentence or a part of a sentence read by neural voices.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 750px;" width="750"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="361.818px"&gt;
&lt;P&gt;Original&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="387.273px"&gt;
&lt;P&gt;Tune in SSML&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="361.818px"&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Sometimes somebody will bring something that you&amp;nbsp;really&amp;nbsp;like.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/rate-before.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="387.273px"&gt;
&lt;P&gt;&lt;FONT size="2"&gt;Sometimes somebody will bring something that you &lt;STRONG&gt;&amp;lt;prosody rate="-51.00%"&amp;gt;&lt;/STRONG&gt;really &lt;STRONG&gt;&amp;lt;/prosody&amp;gt;&lt;/STRONG&gt;like.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/rate-after.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
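&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The break and rate adjustments can also be combined in a single SSML document. The short sketch below simply builds such a document as a Python string (the voice name is again a placeholder); the result can be passed to speak_ssml_async exactly as in the contour sketch earlier in this article.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: combine a manual pause and a slower speaking rate in one SSML document.
# The voice name is a placeholder; pass the resulting string to speak_ssml_async
# exactly as in the pitch-contour sketch earlier in this article.
sentence = (
    'Now &amp;lt;break time="100ms" /&amp;gt;50 years after the event, he may finally have an answer. '
    'Sometimes somebody will bring something that you '
    '&amp;lt;prosody rate="-51.00%"&amp;gt;really &amp;lt;/prosody&amp;gt;like.'
)

# The mstts namespace declaration is only needed if you also use Microsoft-specific
# elements such as mstts:ttsbreak from the breaks table above.
ssml = f"""
&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"&amp;gt;
  &amp;lt;voice name="en-US-AriaNeural"&amp;gt;
    {sentence}
  &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
"""

print(ssml)
&lt;/LI-CODE&gt;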
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Adjust neural voice prosody through the Audio Content Creation tool&lt;/H3&gt;
&lt;P&gt;Besides SSML, we also offer an easy-to-use &lt;A href="https://speech.microsoft.com/audiocontentcreation" target="_blank" rel="noopener"&gt;Audio Content Creation&lt;/A&gt; tool to help you fine-tune TTS output. Paste or upload your text in the audio content creation tool, specify the voice you want to use, and then adjust the voice parameters in the tuning panel. You can switch your view to check the SSML format generated along with your adjustments and use the SSML in your code, or generate audio directly from the tool for your further use.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See below for a demo showing how prosody is adjusted using the code-free tool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/mUvf2NbfuYU" align="center" size="large" width="600" height="450" uploading="false" thumbnail="https://i.ytimg.com/vi/mUvf2NbfuYU/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Define lexicon for your neural TTS output&lt;/H2&gt;
&lt;P&gt;Sometimes TTS does not pronounce words accurately in the way you want, such as&amp;nbsp;a company or person’s name. To improve pronunciation, you can define the reading of&amp;nbsp;these&amp;nbsp;entities&amp;nbsp;in SSML&amp;nbsp;using&amp;nbsp;the &amp;lt;&lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#use-phonemes-to-improve-pronunciation" target="_blank" rel="noopener"&gt;phoneme&lt;/A&gt;&amp;gt;&amp;nbsp;and&amp;nbsp;&amp;lt;sub&amp;gt; tags. However, defining multiple entities one by one during speech synthesis can be time-consuming. The new custom lexicon capability makes this process easier.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With &lt;A href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#use-custom-lexicon-to-improve-pronunciation" target="_blank" rel="noopener"&gt;custom lexicon&lt;/A&gt;, simply specify the reading of entities in a list stored as an .xml or .pls file, provide a web link for your list, and refer to this list in SSML. The right pronunciation will be applied to all specified custom words at once.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a sample:&lt;/P&gt;
&lt;P&gt;For your scenario, you may want to adjust the pronunciations of “BTW,” “Alki Beach” and “Jean” from the default TTS. Hear the differences in the samples below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 1000px;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="348.182px"&gt;
&lt;P&gt;Script&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="190.909px"&gt;
&lt;P&gt;Default reading&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="210px"&gt;
&lt;P&gt;Applied custom lexicon&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="348.182px"&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;EM&gt;BTW&lt;/EM&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;EM&gt;, we will arrive &lt;STRONG&gt;Alki Beach&lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt; &lt;EM&gt;probably 8:00 tomorrow morning.&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="2"&gt;&lt;SPAN&gt;&lt;EM&gt;Could you help leave a message to&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;STRONG&gt;&lt;EM&gt;Jean&lt;/EM&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;EM&gt; Pierre &lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;EM&gt;for me?&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="190.909px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/BeforeCustomLexicon.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="210px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/2020Aprilblog/WithCustomLexicon.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is how the custom lexicon list is defined for the above sample:&lt;SPAN&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="html"&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;!-- Root element required for a standalone .xml/.pls lexicon file --&amp;gt;
&amp;lt;lexicon version="1.0" xmlns="http://www.w3.org/2005/01/pronunciation-lexicon" alphabet="ipa" xml:lang="en-US"&amp;gt;
  &amp;lt;lexeme&amp;gt;
    &amp;lt;grapheme&amp;gt;BTW&amp;lt;/grapheme&amp;gt;
    &amp;lt;alias&amp;gt;By the way&amp;lt;/alias&amp;gt;
  &amp;lt;/lexeme&amp;gt;
  &amp;lt;lexeme&amp;gt;
    &amp;lt;grapheme&amp;gt;Alki&amp;lt;/grapheme&amp;gt;
    &amp;lt;phoneme&amp;gt;æl.kaɪˈ&amp;lt;/phoneme&amp;gt;
  &amp;lt;/lexeme&amp;gt;
  &amp;lt;lexeme&amp;gt;
    &amp;lt;grapheme&amp;gt;Jean&amp;lt;/grapheme&amp;gt;
    &amp;lt;phoneme alphabet="ipa" ph="ʒɑˈn"&amp;gt;Jean&amp;lt;/phoneme&amp;gt;
  &amp;lt;/lexeme&amp;gt;
&amp;lt;/lexicon&amp;gt;
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can upload the list online and put it in a data store like &lt;A href="https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-portal" target="_blank" rel="noopener"&gt;Azure Blob Storage&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;During speech synthesis, use the SSML below to refer to the list and apply the custom lexicon to the input text. Speech synthesis will then reflect your defined pronunciations in the output all at once.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="html"&gt;&amp;lt;lexicon uri="http://www.example.com/customlexicon.xml"/&amp;gt; 
BTW, we will arrive Alki beach probably 8:00 tomorrow morning. 
Could you help leave a message to Jean Pierre  for me? &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information about custom lexicon, please see our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#use-custom-lexicon-to-improve-pronunciation" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt;.&lt;/P&gt;
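&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of how this looks end to end, the Python snippet below wraps the lexicon reference in a complete SSML document and sends it to the Speech SDK for synthesis. The lexicon URL, subscription key, region, and voice name are placeholders; substitute the values for your own Speech resource and hosted lexicon file.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Minimal sketch: reference the hosted custom lexicon file from SSML and synthesize
# the sample text with the Speech SDK for Python (pip install azure-cognitiveservices-speech).
# The lexicon URL, subscription key, region, and voice name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="your-speech-key", region="your-region")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)  # plays to default speaker

ssml = """
&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"&amp;gt;
  &amp;lt;voice name="en-US-AriaNeural"&amp;gt;
    &amp;lt;lexicon uri="http://www.example.com/customlexicon.xml"/&amp;gt;
    BTW, we will arrive Alki Beach probably 8:00 tomorrow morning.
    Could you help leave a message to Jean Pierre for me?
  &amp;lt;/voice&amp;gt;
&amp;lt;/speak&amp;gt;
"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
&lt;/LI-CODE&gt;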
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Get started&lt;/H2&gt;
&lt;P&gt;Since the &lt;A href="https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/" target="_blank" rel="noopener"&gt;release of our Neural TTS&lt;/A&gt; less than two years ago, this field has advanced rapidly. New research models including &lt;A href="https://arxiv.org/abs/1809.08895" target="_blank" rel="noopener"&gt;Transformer TTS&lt;/A&gt; and &lt;A href="https://www.microsoft.com/en-us/research/blog/fastspeech-new-text-to-speech-model-improves-on-speed-accuracy-and-controllability/" target="_blank" rel="noopener"&gt;FastSpeech&lt;/A&gt; have been proposed and have improved the state of the art. With these research innovations,&amp;nbsp;we’ve not only improved the controllability of the neural voice output, but also made the synthesized speech more robust and greatly improved the performance of neural TTS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started with &lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Text to Speech on Azure&lt;/A&gt; today.&lt;/P&gt;</description>
      <pubDate>Fri, 01 May 2020 16:30:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/finetune-neural-text-to-speech-output-with-advanced/ba-p/1348941</guid>
      <dc:creator>Melinda Ma</dc:creator>
      <dc:date>2020-05-01T16:30:29Z</dc:date>
    </item>
    <item>
      <title>Running ML.NET + Notebooks in Azure Machine Learning Studio</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/running-ml-net-notebooks-in-azure-machine-learning-studio/ba-p/1323238</link>
      <description>&lt;P&gt;&lt;FONT size="7"&gt;Time Series Forecasting in ML.NET and Notebooks in Azure ML Studio&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;In this sample, learn how to run time series forecasting in a Jupyter notebook. We will read in data from a csv file, do some exploratory plots, fit a regression model, and fit a more sophisticated Singular Spectrum Analysis (SSA) forecaster.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Download the source code&lt;/H1&gt;
&lt;P&gt;&lt;A href="https://aka.ms/timeseries-mlnet" target="_blank" rel="noopener"&gt;Access the GitHub repo&lt;/A&gt; and copy the “clone” link in order to run this tutorial on your own machine.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Prerequisites&lt;/H2&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Install C# Kernel&lt;/H3&gt;
&lt;P&gt;Note: These instructions only apply if you intend to run this notebook in Azure Machine Learning. You can also run this notebook on your local machine by following &lt;A href="https://github.com/dotnet/interactive#how-to-install-net-interactive" target="_blank" rel="noopener"&gt;the instructions at the dotnet interactive GitHub repo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go to ml.azure.com. Select your subscription and machine learning workspace.&lt;/LI&gt;
&lt;LI&gt;Open up the "Notebooks" tab on the lefthand side of the page&lt;/LI&gt;
&lt;LI&gt;Create a compute instance if you have not already, or select an existing one from the dropdown menu.&lt;/LI&gt;
&lt;LI&gt;Open a notebook file with an extension of .ipynb&lt;/LI&gt;
&lt;LI&gt;Select the Terminal button at the top right.&lt;/LI&gt;
&lt;LI&gt;Follow &lt;A href="https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-ubuntu-1604" target="_blank" rel="noopener"&gt;the instructions here&lt;/A&gt; to register a Microsoft product key and install .NET Core 3.1.&lt;/LI&gt;
&lt;LI&gt;Install dotnet interactive by running dotnet tool install -g --add-source "&lt;A href="https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" target="_blank" rel="noopener"&gt;https://dotnet.myget.org/F/dotnet-try/api/v3/index.json&lt;/A&gt;" dotnet-interactive&lt;/LI&gt;
&lt;LI&gt;Create a symlink between the installed location of dotnet interactive and your local bin directory: sudo ln -s /home/azureuser/.dotnet/tools/dotnet-interactive /usr/local/bin/dotnet-interactive&lt;/LI&gt;
&lt;LI&gt;Set your dotnet root directory: export DOTNET_ROOT=$(dirname $(realpath $(which dotnet)))&lt;/LI&gt;
&lt;LI&gt;Install the jupyter kernel: dotnet interactive jupyter install&lt;/LI&gt;
&lt;LI&gt;Verify the installation by running jupyter kernelspec list. You should see ".net-fsharp" and ".net-csharp" listed as kernels.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3&gt;&amp;nbsp;&lt;/H3&gt;
&lt;H3&gt;Install Mkl on Ubuntu Linux&lt;/H3&gt;
&lt;P&gt;If you are running ML.NET for the first time on an Ubuntu Linux machine (like Azure Machine Learning notebooks), please &lt;A href="https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-extra-dependencies#linux" target="_blank" rel="noopener"&gt;follow these instructions&lt;/A&gt; to download the required dependencies.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Start visualizing data&lt;/H2&gt;
&lt;P&gt;Great! We’re now set up to run ML.NET in Azure ML Integrated Notebooks. Let’s begin by visualizing our data, &lt;A href="https://github.com/dotnet/interactive/blob/master/NotebookExamples/csharp/Docs/Plotting%20with%20Xplot.ipynb" target="_blank" rel="noopener"&gt;using the XPlot library.&lt;/A&gt; Notice how the data display a sinusoidal pattern, but there’s also a good amount of noise.&lt;/P&gt;
&lt;DIV id="tinyMceEditorgopalv_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorgopalv_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV id="tinyMceEditorgopalv_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="original-series.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185501iEFB38E965B9C2648/image-size/large?v=v2&amp;amp;px=999" role="button" title="original-series.png" alt="original-series.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Compute an engineered feature&lt;/H2&gt;
&lt;P&gt;As we mentioned, the data display a sinusoidal pattern, so let’s use that intuition to fit a regression model with an engineered feature. Specifically, let’s fit a model using a cosine function as our independent variable. Below, consider how well a cosine model can mimic the periodicity of our original series. The only things that are wrong are the distance between crests and troughs of each wave (the “amplitude”) and the y-intercept of the wave. Luckily, linear regression can give us these values.&lt;/P&gt;
&lt;DIV id="tinyMceEditorgopalv_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="original-series-cosine.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185502i8BE2D786BC3C80CA/image-size/large?v=v2&amp;amp;px=999" role="button" title="original-series-cosine.png" alt="original-series-cosine.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Fit a linear regression model&lt;/H2&gt;
&lt;P&gt;Let’s try fitting a model using our engineered features from the previous step. Because the input data are so nicely sinusoidal, this model actually works quite well. It has a Mean Absolute Error (MAE) of 1.997 and a Root Mean Squared Error (RMSE) of 2.574. Let’s see if we can do better.&lt;/P&gt;
&lt;DIV id="tinyMceEditorgopalv_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="series-with-regression.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185503i39E2C80046F050C8/image-size/large?v=v2&amp;amp;px=999" role="button" title="series-with-regression.png" alt="series-with-regression.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Use ML.NET’s SSA Forecasting Transformer&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.timeseriescatalog.forecastbyssa?view=ml-dotnet" target="_blank" rel="noopener"&gt;ML.NET’s SSAForecastingTransformer&lt;/A&gt; can fit a forecasting model on our original data, without our having to provide it with engineered features. Most of the required parameters are based on the amount of data you have and the amount of time in the future you expect to predict. The only tricky one is the “windowSize” parameter, which should be set to be twice the length of the maximum expected seasonality in the data. For example, if you have data that is collected once per day in an environment that shows both monthly and yearly seasonality, you should set windowSize to be twice the length of the year, or 730. &lt;A href="https://aka.ms/timeseries-mlnet" target="_blank" rel="noopener"&gt;See the example notebook&lt;/A&gt; for more details on the other parameters.&lt;/P&gt;
&lt;P&gt;Notice that the SSA Forecasting Transformer gives us not only a lower MAE and RMSE of 1.963 and 2.491, respectively, but also gives us 95% confidence bounds.&lt;/P&gt;
&lt;DIV id="tinyMceEditorgopalv_4" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="train-ssa.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185504iD429B7354616E574/image-size/large?v=v2&amp;amp;px=999" role="button" title="train-ssa.png" alt="train-ssa.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Predict future values&lt;/H2&gt;
&lt;P&gt;So we’ve found our model of interest; now let’s use it to predict the future! We can simply retrain the model on all of the data, use &lt;A href="https://docs.microsoft.com/dotnet/api/microsoft.ml.transforms.timeseries.predictionfunctionextensions.createtimeseriesengine?view=ml-dotnet" target="_blank" rel="noopener"&gt;CreateTimeSeriesEngine&lt;/A&gt; to get a predictor, and then call Predict() to predict points up to the horizon we specified during training.&lt;/P&gt;
&lt;DIV id="tinyMceEditorgopalv_5" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="predict-ssa.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185505i96B685C197AC1AD2/image-size/large?v=v2&amp;amp;px=999" role="button" title="predict-ssa.png" alt="predict-ssa.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2&gt;Next steps&lt;/H2&gt;
&lt;P&gt;In this notebook, you learned how to do time series forecasting in ML.NET with Jupyter notebooks. We initially used linear regression with an engineered feature, but we were able to improve performance by relying on ML.NET's SSA forecaster.&lt;/P&gt;
&lt;P&gt;To learn more about C# and Jupyter Notebooks,&amp;nbsp;&lt;A href="https://github.com/dotnet/interactive#how-to-install-net-interactive" target="_blank" rel="noopener"&gt;check out this GitHub repo&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;To see another example of using ML.NET in Jupyter,&amp;nbsp;&lt;A href="https://devblogs.microsoft.com/cesardelatorre/using-ml-net-in-jupyter-notebooks/" target="_blank" rel="noopener"&gt;check out this blog&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;To learn about using DataFrames in C#,&amp;nbsp;&lt;A href="https://devblogs.microsoft.com/dotnet/an-introduction-to-dataframe/" target="_blank" rel="noopener"&gt;check out this blog&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;To get started with Model Builder in Visual Studio,&amp;nbsp;&lt;A href="https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro" target="_blank" rel="noopener"&gt;try this getting started tutorial&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2020 20:20:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/running-ml-net-notebooks-in-azure-machine-learning-studio/ba-p/1323238</guid>
      <dc:creator>gopalv</dc:creator>
      <dc:date>2020-04-22T20:20:49Z</dc:date>
    </item>
    <item>
      <title>Open-Source Repository of Forecasting Best Practices for Accelerating Solution Development</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/open-source-repository-of-forecasting-best-practices-for/ba-p/1298941</link>
      <description>&lt;P&gt;&lt;EM&gt;Chenhui Hu, Vanja Paunic, Hong Ooi, Tao Wu, Wee Hyong Tok&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Time series forecasting is one of the most important topics in data science. Imagine that you are a business owner; you might want to predict different sorts of future events to make better decisions and optimize your resource allocation. Typical examples of time series forecasting use cases are retail sales forecasting, package shipment delay forecasting, energy demand forecasting, and financial forecasting.&amp;nbsp;As you can see, forecasting is everywhere! Given its ubiquitous nature and wide-ranging business applications, we have developed an open-source &lt;A href="https://github.com/microsoft/forecasting" target="_blank" rel="noopener"&gt;forecasting repo&lt;/A&gt; that puts world-class models and forecasting best practices in the hands of data scientists and industry experts – i.e., you!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="data_split_and_forecasts.gif" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/184308i4096D02FE3C2E623/image-size/large?v=v2&amp;amp;px=999" role="button" title="data_split_and_forecasts.gif" alt="data_split_and_forecasts.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;Figure 1: Visualization of training and testing iterations of a sales forecasting scenario using a LightGBM model&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Forecasting Best Practices and Solution Accelerators&lt;/H2&gt;
&lt;P&gt;This repository provides examples of building forecasting solutions presented as Python Jupyter notebooks, R markdown files, and a library of utility functions. Our goal is to help you as a data scientist or machine learning engineer with varying levels of knowledge in forecasting&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Learn best practices for the development of forecasting solutions in a variety of languages.&lt;/LI&gt;
&lt;LI&gt;Leverage recent advances in forecasting algorithms to build high-performance solutions and operationalize them.&lt;/LI&gt;
&lt;LI&gt;Accelerate the solution development process for real-world forecasting problems. With the provided examples, you can reduce “time to market” by orders of magnitude, simplifying the path from defining the business problem to having a working solution.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In the repository, you will find state-of-the-art (SOTA) forecasting models using traditional machine learning and deep learning approaches. Implementations of SOTA models in this release are centered around retail sales forecasting and are written in Python and R, two of the most popular programming languages in the forecasting domain. To enable high-throughput forecasting scenarios, we have included notebooks for forecasting multiple time series with distributed training techniques such as Ray in Python, the parallel package in R, and multi-threading in LightGBM. The following is a quick summary of forecasting models covered in this repository.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;STRONG&gt;Model&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/00_quick_start/autoarima_single_round.ipynb" target="_blank" rel="noopener"&gt;Auto ARIMA&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Auto Regressive Integrated Moving Average (ARIMA) model that is automatically selected&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/00_quick_start/azure_automl_single_round.ipynb" target="_blank" rel="noopener"&gt;Linear Regression&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Linear regression model trained on lagged features of the target variable and external features&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/00_quick_start/lightgbm_single_round.ipynb" target="_blank" rel="noopener"&gt;LightGBM&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Gradient boosting decision tree implemented with LightGBM package for high accuracy and fast speed&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/02_model/dilatedcnn_multi_round.ipynb" target="_blank" rel="noopener"&gt;DilatedCNN&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Dilated Convolutional Neural Network that captures long-range temporal flow with dilated causal connections&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/R/02_basic_models.Rmd" target="_blank" rel="noopener"&gt;Mean Forecast&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;R&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Simple forecasting method based on historical mean&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/R/02a_reg_models.Rmd" target="_blank" rel="noopener"&gt;ARIMA&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;R&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;ARIMA model with or without external features&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/R/02_basic_models.Rmd" target="_blank" rel="noopener"&gt;ETS&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;R&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Exponential Smoothing algorithm with additive errors&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/R/02b_prophet_models.Rmd" target="_blank" rel="noopener"&gt;Prophet&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;R&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Automated forecasting procedure based on an additive model with non-linear trends and Tidyverts framework&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
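&lt;P&gt;To give a feel for how the machine learning models above are used, here is a minimal sketch (not the repository’s actual utility code) that builds simple lag features with pandas and fits a multi-threaded LightGBM regressor on a synthetic stand-in for the grocery sales data. The column names, lag choices, and synthetic series are our own assumptions; see the linked LightGBM notebook for the full feature engineering and evaluation setup.&lt;/P&gt;
&lt;PRE&gt;import numpy as np
import pandas as pd
import lightgbm as lgb

# Synthetic stand-in for the grocery sales data used in the repository's notebooks.
rng = np.random.default_rng(0)
rows = [(store, brand, week, 50 + 10 * np.sin(week / 4) + rng.normal(0, 3))
        for store in range(2) for brand in range(3) for week in range(1, 105)]
df = pd.DataFrame(rows, columns=["store", "brand", "week", "sales"])
df = df.sort_values(["store", "brand", "week"])

# Simple lag features per series (the repo's utilities are more complete).
for lag in [1, 2, 3, 4]:
    df[f"sales_lag_{lag}"] = df.groupby(["store", "brand"])["sales"].shift(lag)
df = df.dropna()

# Time-based split: hold out the last 4 weeks as the test period.
horizon = 4
test_mask = df["week"] &gt; df["week"].max() - horizon
train, test = df[~test_mask], df[test_mask]

features = [c for c in df.columns if c.startswith("sales_lag_")]
model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05, n_jobs=-1)  # multi-threaded
model.fit(train[features], train["sales"])
print(model.predict(test[features])[:5])&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;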
&lt;P&gt;The repository also comes with Azure Machine Learning (Azure ML) themed notebooks and best practices recipes to accelerate the development of scalable, production-grade forecasting solutions on Azure. You will find the following examples for forecasting with Azure AutoML as well as tuning and deploying a forecasting model on Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;STRONG&gt;Method&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;&lt;STRONG&gt;Language&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/00_quick_start/azure_automl_single_round.ipynb" target="_blank" rel="noopener"&gt;Azure AutoML&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Azure ML service that automates the model development process and identifies the best machine learning pipeline&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/03_model_tune_deploy/azure_hyperdrive_lightgbm.ipynb" target="_blank" rel="noopener"&gt;HyperDrive&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Azure ML service for tuning hyperparameters of machine learning models in parallel in the cloud&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="144"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/forecasting/blob/master/examples/grocery_sales/python/03_model_tune_deploy/azure_hyperdrive_lightgbm.ipynb" target="_blank" rel="noopener"&gt;Azure ML Web Service&lt;/A&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="114"&gt;
&lt;P&gt;Python&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="366"&gt;
&lt;P&gt;Azure ML service for deploying a model as a web service on Azure Container Instance&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
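&lt;P&gt;As a rough outline of the Azure AutoML path in the table above, the sketch below submits a forecasting experiment from the Python SDK. The workspace, dataset, column names, and compute target are placeholders, and parameter names such as time_column_name and max_horizon have shifted across azureml-sdk versions, so treat this as an assumption-laden starting point and follow the linked AutoML notebook for a verified configuration.&lt;/P&gt;
&lt;PRE&gt;from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                          # assumes a local config.json
train_data = Dataset.get_by_name(ws, "sales_train")   # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    training_data=train_data,
    label_column_name="sales",        # placeholder target column
    time_column_name="week",          # placeholder time column
    max_horizon=4,                    # forecast four periods ahead
    n_cross_validations=3,
    compute_target="cpu-cluster",     # hypothetical compute target name
    experiment_timeout_hours=1,
)

run = Experiment(ws, "grocery-forecasting").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;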
&lt;P&gt;Developing an accurate forecasting solution can be a complex and time-consuming process. We hope the forecasting repo will help shorten your development cycle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;To Learn More and Contribute&lt;/H2&gt;
&lt;P&gt;For more information, please visit: &lt;A href="https://github.com/microsoft/forecasting" target="_blank" rel="noopener"&gt;https://github.com/microsoft/forecasting&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Contributions from the open-source community are always welcome! Please feel free to check our &lt;A href="https://github.com/microsoft/forecasting/blob/master/CONTRIBUTING.md" target="_blank" rel="noopener"&gt;contribution guide&lt;/A&gt; if you would like to contribute to the content and bring in the latest SOTA algorithms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Tue, 14 Apr 2020 23:30:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/open-source-repository-of-forecasting-best-practices-for/ba-p/1298941</guid>
      <dc:creator>chenhuihu</dc:creator>
      <dc:date>2020-04-14T23:30:20Z</dc:date>
    </item>
    <item>
      <title>Deploying your COVID-19 Healthcare Bot - Everything you need to get started</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/deploying-your-covid-19-healthcare-bot-everything-you-need-to/ba-p/1279562</link>
      <description>&lt;P&gt;Public health organizations, healthcare providers, hospitals and others on the frontline of COVID-19 response have had to act quickly to support the sudden spike in inquiries from patients and constituents looking to get answers to a common set of requests such as up-to-date outbreak information, assess symptoms and risk factors for people worried about infection, and suggest a next course of action. Many of these organizations have expressed concerns with being able to support the volumes of inquiries, and consequently have been using the Microsoft Healthcare Bot to help provide critical information to their patients.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-hcb.microsofthealthcarebot" target="_blank" rel="noopener"&gt;Microsoft’s Healthcare Bot &lt;/A&gt;&amp;nbsp;is a scalable Azure-based SaaS solution that empowers Microsoft customers and partners to build and deploy compliant, AI-powered health agents, allowing them to offer their users intelligent, personalized access to health-related information and interactions through a natural conversation experience. It is one solution that uses AI to help the CDC and other frontline organizations to provide &lt;SPAN&gt;help to those&lt;/SPAN&gt; who need it.&lt;/P&gt;
&lt;P&gt;The Healthcare Bot can easily be customized to suit an organization’s scenarios and protocols. To assist in the rapid deployment of COVID-19-specific bots, Microsoft has made available a set of COVID-19 templates that customers can use and modify:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;COVID-19 Risk Assessment&lt;/LI&gt;
&lt;LI&gt;COVID-19 Frequently Asked Questions&lt;/LI&gt;
&lt;LI&gt;COVID-19 Worldwide metrics&lt;/LI&gt;
&lt;LI&gt;COVID-19 Clinical Triage&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-center"&gt;&lt;STRONG&gt;Deployment of COVID-19 Healthcare Bot in your environment&amp;nbsp;&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;To help you deploy your COVID-19 healthcare bot, Microsoft has created a Reference architecture, deployment template and supporting &lt;SPAN style="font-style: normal !msorm;"&gt;&lt;EM&gt;How to&lt;/EM&gt;&lt;/SPAN&gt; videos and guides.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Reference Architecture &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P data-unlink="true"&gt;The reference architecture&amp;nbsp;provides guidance on a high-availability deployment of the Healthcare Bot and associated Azure services across two regions.&lt;/P&gt;
&lt;P&gt;Note: The architecture can also be deployed in a single region. If you choose to deploy in a single region, it is recommended that you model and estimate your peak traffic to ensure that a single-region deployment is appropriate for your situation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="HealthBotRefArch.PNG" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/181985iEC0F7B8AA6BB0FFC/image-size/large?v=v2&amp;amp;px=999" role="button" title="HealthBotRefArch.PNG" alt="HealthBotRefArch.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Deployment Template&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;To assist in deploying the reference architecture, we have developed an &lt;SPAN&gt;ARM template&lt;/SPAN&gt; for you to use. The step-by-step instructions to deploy and configure the reference architecture can be found here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/HealthBotRefArchDeploy#a-deploy-the-arm-template" target="_blank" rel="noopener"&gt;Deploy &lt;SPAN&gt;Microsoft Health Bot Reference Architecture&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To then set up your Health Bot, follow the instructions in the &lt;A href="https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/updated-quick-start-setting-up-your-covid-19-health-bot/ba-p/1230537" target="_blank" rel="noopener"&gt;Quick Start: Setting Up Your COVID-19 Health Bot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;If you are ready to deploy and would like assistance: &amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Contact your account team for a quick demo and/or alignment of resources.&lt;/LI&gt;
&lt;LI&gt;Speak to one of our &lt;A href="https://www.microsoft.com/en-us/research/project/health-bot/#!partners" target="_blank" rel="noopener"&gt;Health Bot Partners&lt;/A&gt; who can help you deploy and customize your own COVID-19 Health Bot.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 class="lia-align-center"&gt;&amp;nbsp;&lt;/H2&gt;
&lt;H2 class="lia-align-center"&gt;Additional Resources&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/updated-quick-start-setting-up-your-covid-19-health-bot/ba-p/1230537" target="_blank" rel="noopener"&gt;Quick Start: Setting Up Your COVID-19 Health Bot&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/video-series-microsoft-healthcare-bot-service-for-covid-19/ba-p/1266691" target="_blank" rel="noopener"&gt;Video Series – Microsoft Healthcare Bot Service for COVID-19: Getting Started&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/research/project/health-bot/" target="_blank" rel="noopener"&gt;Demo Bot (Not COVID-19 Specific).&lt;/A&gt; &amp;nbsp;Click on: Try a demo of an example&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/healthbot/#what-is-the-microsoft-health-bot-service" target="_blank" rel="noopener"&gt;Healthcare Bot Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://blogs.microsoft.com/blog/2020/03/20/delivering-information-and-eliminating-bottlenecks-with-cdcs-covid-19-assessment-bot/" target="_blank" rel="noopener"&gt;Blog: Delivering Information and eliminating bottlenecks with CDC’s COVID-19 assessment bot&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks for reading and let us know how else we can help!&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MarkPerry_LinkedIn.jfif" style="width: 200px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/181988i62CAB08613873445/image-size/small?v=v2&amp;amp;px=200" role="button" title="MarkPerry_LinkedIn.jfif" alt="MarkPerry_LinkedIn.jfif" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://www.linkedin.com/in/markperrymicrosoft/" target="_blank" rel="noopener"&gt;Mark Perry&lt;/A&gt;, Microsoft Director - Customer Success&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GR-01.jpg" style="width: 187px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/185808iCAAEA708D0BB036A/image-size/small?v=v2&amp;amp;px=200" role="button" title="GR-01.jpg" alt="GR-01.jpg" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://www.linkedin.com/in/ganesh-radhakrishnan-11217329/" target="_blank" rel="noopener"&gt;Ganesh Radhakrishnan&lt;/A&gt;, Microsoft Cloud Solution Architect&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2020 03:33:05 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/deploying-your-covid-19-healthcare-bot-everything-you-need-to/ba-p/1279562</guid>
      <dc:creator>Ganesh-R</dc:creator>
      <dc:date>2020-04-22T03:33:05Z</dc:date>
    </item>
    <item>
      <title>Introducing new voice styles in Azure Cognitive Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/introducing-new-voice-styles-in-azure-cognitive-services/ba-p/1248368</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This post was co-authored by &lt;LI-USER uid="175688"&gt;&lt;/LI-USER&gt;, &lt;LI-USER uid="23979"&gt;&lt;/LI-USER&gt;&amp;nbsp;, Yueying Liu, and Peter Pan. &amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Neural TTS enables fluid, natural-sounding speech that matches the patterns and intonation of human voices, helping developers bring their solutions to life.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today, we’re building upon our Neural Text to Speech (Neural TTS) capabilities in Azure Cognitive Services with new voice styles. With the new styles—newscast, customer service, and digital assistant—developers can tailor the voice of their apps and services to fit their brand or unique scenario.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Built on a powerful base model, our neural TTS voices are very natural, reliable, and expressive. Through transfer learning, the neural TTS model can learn different speaking styles from various speakers, enabling nuanced voices.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In addition to our new voice styles optimized for specific scenarios, we are also releasing new emotion styles. These styles allow you to adjust voices to express different emotions to fit the context, like cheerfulness or empathy. Let’s dive in.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Introducing Newscast, Customer Service, and Digital Assistant styles&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Newscast&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With neural TTS voices in the newscast style, your users can enjoy listening to news or articles in a professional tone that reflects what you might hear on TV or radio newscasts.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear Aria's (English – Female) and Xiaoxiao’s (Chinese – Female) voices in the &lt;EM&gt;newscast&lt;/EM&gt; style:&lt;/P&gt;
&lt;TABLE style="width: 100%;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="442.727px" height="30px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="150.909px" height="30px"&gt;
&lt;P&gt;Newscast style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="155.455px" height="30px"&gt;
&lt;P&gt;Default&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="442.727px" height="139px"&gt;
&lt;P&gt;&lt;EM&gt;Heavy snow and strong winds hammered parts of the central U.S. on Thursday and began moving into the Great Lakes region, knocking out power to tens of thousands of people and creating hazardous travel conditions a day after pummeling Colorado.&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="150.909px" height="139px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/newscast.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="155.455px" height="139px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_1" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/default_n.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="442.727px" height="111px"&gt;
&lt;P&gt;现今，大批企业以数字化转型为战略目标，数字化转型可赋能企业重构竞争环境、满足客户期望、增强服务运营。为了真正实现“ being digital ”, 许多企业将人工智能视作实现数字化转型目标的首选技术工具之一。&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="150.909px" height="111px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Newscast/Newscast.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="155.455px" height="111px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Newscast/Newscast-General.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Check out the newscast style in the Bing mobile app. When you search news with the voice search feature, you can hear news briefs using Aria’s newscast style voice.&lt;/P&gt;
&lt;P&gt;&lt;LI-VIDEO vid="https://youtu.be/RuBnd4RO2LE" align="center" size="small" width="200" height="150" uploading="false" thumbnail="https://i.ytimg.com/vi/RuBnd4RO2LE/hqdefault.jpg" external="url"&gt;&lt;/LI-VIDEO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;You can also check out Xiaoxiao’s newscast style voice, which has been adopted in WeChat through the Microsoft Listening Docs app. In Microsoft Listening Docs, users can hear Xiaoxiao’s voice read out multiple document types such as Word, PowerPoint, and Excel, as well as images. Users can easily generate audio content for online training, news podcasts and more, and share it with their social circles.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Customer Service&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The customer service style features a friendly and engaging tone and is suitable for scenarios involving customer support, such as an individual checking into their flight, making a restaurant reservation, or reporting a claim.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear Aria's and Xiaoxiao’s voices in the &lt;EM&gt;customer service&lt;/EM&gt; style:&lt;/P&gt;
&lt;TABLE class=" lia-align-left" style="width: 100%;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="378.182px" height="57px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="196.364px" height="57px"&gt;
&lt;P&gt;Customer Service style&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174.545px"&gt;
&lt;P&gt;Default&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="378.182px" height="139px"&gt;
&lt;P&gt;&lt;EM&gt;Alright, it's going to be right in front of your door, within 30 minutes. Thanks for calling &amp;nbsp;Pizza Loco! &lt;/EM&gt;&lt;EM&gt;Have a great night!&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="196.364px" height="139px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/CustomerService.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_0" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174.545px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/default_cs.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="378.182px" height="275px"&gt;
&lt;P&gt;客服：您好，欢迎致电智慧银行，我是您的智能客服晓晓，请问有什么可以帮您？&lt;/P&gt;
&lt;P&gt;客户：你好，我想调整信用卡的额度。&lt;/P&gt;
&lt;P&gt;客服：嗯，请稍等，我查询一下状态。请问您要调整到多少额度？&lt;/P&gt;
&lt;P&gt;客户：帮我调到三万人民币吧。&lt;/P&gt;
&lt;P&gt;客服：好的，已经给您变更成功，稍后您会收到短信提醒。&lt;/P&gt;
&lt;P&gt;客户：好的，谢谢。&lt;/P&gt;
&lt;P&gt;客服：感谢您的来电，祝您生活愉快，再见。&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="196.364px" height="275px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/CustomerService/CustomerService.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_2" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="174.545px" style="width: 200.909px; vertical-align: middle;"&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/CustomerService/CustomerService-General.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Digital&lt;/STRONG&gt;&amp;nbsp;&lt;STRONG&gt;Assistant&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Many customers have been using neural TTS voices for their digital assistant solutions. We are introducing two styles in this area: a chat style for more casual, conversational bots, and a more professional style for scenarios such as in-car digital assistants.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;EM&gt;chat&lt;/EM&gt; style features a conversational tone, simulating casual dialogue.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear Aria’s voice in the &lt;EM&gt;chat &lt;/EM&gt;style:&lt;/P&gt;
&lt;TABLE style="width: 100%;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="117.273px"&gt;
&lt;P&gt;Style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="289.091px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110.909px"&gt;
&lt;P&gt;Chat style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102.727px"&gt;
&lt;P&gt;Default&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="117.273px"&gt;
&lt;P&gt;Chat&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="289.091px"&gt;
&lt;P&gt;&lt;EM&gt;Oh, well that's quite a change from California to Utah&lt;/EM&gt;.&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="110.909px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_6" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/chat.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="102.727px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_7" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/default_c.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;EM&gt;assistant&lt;/EM&gt; style features a friendly and helpful tone, which is suitable in scenarios such as smart speakers or in-car assistants. Use the digital assistant voice to hear the weather forecast, search for information, navigate directions, set reminders, and more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear Xiaoxiao’s voice in the &lt;EM&gt;assistant&lt;/EM&gt; style:&lt;/P&gt;
&lt;TABLE style="width: 100%;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="454.545px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137.273px"&gt;
&lt;P&gt;Assistant style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="157.273px"&gt;
&lt;P&gt;Default&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="454.545px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;没听到你说话，请再说一次。&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137.273px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Assistant/Assistant-1.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="157.273px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Assistant/Assistant-General-1.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="454.545px"&gt;
&lt;P&gt;现在听的是：FM88.8&lt;SPAN&gt;，江苏音乐台的节目，滴滴叭叭早上好。&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="137.273px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Assistant/Assistant.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="157.273px"&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Assistant/Assistant-General.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Bringing new emotions to Neural Text to Speech&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To enable you to build nuanced voices for your unique scenario, Neural Text to Speech also offers different emotion styles. You can access the &lt;EM&gt;cheerful&lt;/EM&gt; and &lt;EM&gt;empathetic&lt;/EM&gt; styles for Aria’s voice, the &lt;EM&gt;lyrical&lt;/EM&gt; style for Xiaoxiao’s voice, which sounds heartfelt and is optimized for reading prose or poetry, and the &lt;EM&gt;cheerful&lt;/EM&gt; style for Francisca’s voice (Brazilian Portuguese).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hear the new styles below:&lt;/P&gt;
&lt;TABLE style="width: 100%;"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="107.273px"&gt;
&lt;P&gt;Style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="264.545px"&gt;
&lt;P&gt;Text&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="132.727px"&gt;
&lt;P&gt;Emotion style&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109.091px"&gt;
&lt;P&gt;Default&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD rowspan="2" width="107.273px"&gt;
&lt;P&gt;Cheerful&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="264.545px"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;G&lt;/EM&gt;&lt;EM&gt;reat, I hope she will like it!&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="132.727px"&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/cheerful.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;TD width="109.091px"&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/default_cheer.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="264.545px"&gt;
&lt;P&gt;&lt;EM&gt;A canadense postou uma música nova no seu perfil oficial do Twitter.&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="132.727px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_12" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/TTS-Francisca_cheerful-Waves-VoiceAssistant-00025.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109.091px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_13" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/TTS-Francisca_neutral-Waves-VoiceAssistant-00025.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107.273px"&gt;
&lt;P&gt;Empathetic&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="264.545px"&gt;
&lt;P&gt;&lt;EM&gt;I want to let you know that you’re loved. I know things are hard right now and it’s OK. You don’t have to do this alone&lt;/EM&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="132.727px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_14" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/empathy.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109.091px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_15" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/default_e.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="107.273px"&gt;
&lt;P&gt;Lyrical&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="264.545px"&gt;
&lt;P&gt;大家晚上好，我是晓晓。在每一个夜晚来临的时候，我都在这里陪你入睡。忙碌的一天又过去了，现在的你是窝在沙发上看着窗外发呆，还是倒了一杯咖啡继续解决白天没有做完的工作呢？时间过得真快呀，在学校里咬着早餐上课，和同学们嬉戏打闹的日子，仿佛就在昨天。但一转眼，我们都穿着西装变成了大人。&amp;nbsp;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="132.727px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_16" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Sentiment/Sentiment.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="109.091px"&gt;
&lt;DIV id="tinyMceEditorMelinda Ma_17" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="http://tts.blob.core.windows.net/blog/multistyle%20blog/zhCN/Sentiment/Sentiment-GD.wav"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These new voice styles are also available for customized brand voices through our &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt; capability, allowing you to build a unique voice that can also benefit from our new scenario and emotion styles. As part of Microsoft's commitment to designing AI responsibly, we have developed guidelines for customers in using Custom Neural Voice, in alignment with Microsoft's&amp;nbsp;&lt;A href="https://www.microsoft.com/AI/our-approach-to-ai" target="_blank" rel="noopener"&gt;principles for responsible innovation in AI.&lt;/A&gt; Learn more about the process for getting started with Custom Neural Voice &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/concepts-gating-overview" target="_blank" rel="noopener"&gt;here&lt;/A&gt;. &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get Started&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Get started with the new neural TTS voice styles available in Azure Cognitive Services. Check out our &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles" target="_blank" rel="noopener"&gt;documentation&lt;/A&gt; to learn more.&lt;/P&gt;
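&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a starting point, here is a small sketch using the Speech SDK for Python to request Aria’s newscast style through the &amp;lt;mstts:express-as&amp;gt; SSML element covered in the documentation above. The subscription key and region are placeholders, and voice and style availability should be confirmed in the documentation for your region and SDK version.&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML requesting Aria's newscast style via mstts:express-as.
ssml = """
&amp;lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"&gt;
  &amp;lt;voice name="en-US-AriaNeural"&gt;
    &amp;lt;mstts:express-as style="newscast"&gt;
      Heavy snow and strong winds hammered parts of the central U.S. on Thursday.
    &amp;lt;/mstts:express-as&gt;
  &amp;lt;/voice&gt;
&amp;lt;/speak&gt;"""

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio")&lt;/PRE&gt;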
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Apr 2020 22:34:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/introducing-new-voice-styles-in-azure-cognitive-services/ba-p/1248368</guid>
      <dc:creator>Melinda Ma</dc:creator>
      <dc:date>2020-04-02T22:34:35Z</dc:date>
    </item>
    <item>
      <title>Cognitive Services adds Brazilian Portuguese to Neural Text to Speech</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/cognitive-services-adds-brazilian-portuguese-to-neural-text-to/ba-p/1210471</link>
      <description>&lt;H1&gt;Cognitive Services adds Brazilian Portuguese to Neural Text to Speech&lt;/H1&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This post was co-authored by Sheng Zhao, Anny Dow, Edward Un, Yueying Liu, Garfield He and Yang Zheng. &amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;AUDIO controls="controls"&gt;
&lt;SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Audio-PTBRblog.mp3"&gt;&lt;/SOURCE&gt;&lt;/AUDIO&gt;&lt;/P&gt;
&lt;P&gt;(voiced by Neural TTS)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/" target="_blank" rel="noopener"&gt;Neural Text to Speech&lt;/A&gt; (Neural TTS) converts text to lifelike speech for more natural interfaces. With natural-sounding speech that matches the stress patterns and intonation of human voices, neural TTS significantly reduces listening fatigue when users are interacting with AI systems, enabling scenarios from audiobooks to voice assistants.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Brazilian Portuguese neural voice now available&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;We’re excited to share that we are expanding our available neural TTS voices with Francisca, our new Brazilian Portuguese (pt-BR) voice. Francisca features the same human-like natural prosody of the other &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;neural TTS voices on Azure&amp;nbsp;&lt;/A&gt;— Guy (American English Male), Jessa (American English Female), Katja (German Female), Elsa (Italian Female), and Xiaoxiao (Mandarin Chinese Female).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With a powerful base model created using a large volume of speech samples, we were able to build Francisca’s voice from much less training data than it would require otherwise. The neural TTS base model learns different speaking styles from multiple speakers, and through transfer learning, can easily adapt its style to a target speaker. Like other neural voices, Francisca can generate realistic speech waveforms for a given text input, matching the patterns of stress and intonation transitions in spoken language seamlessly.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Besides the capability to synthesize speech, developers can also tailor the voice for different scenarios with different voice styles using the neural TTS. For example, the new pt-BR voice can also speak with a “cheerful” tone. The “cheerful” style can be used to express an emotion that is positive and happy. This is particularly useful in chat bot scenarios. You can adjust the speaking styles easily with &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp#adjust-speaking-styles" target="_blank" rel="noopener"&gt;the &amp;lt;mstts:express-as&amp;gt; element in SSML&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We conducted MOS (Mean Opinion Score) studies to evaluate the naturalness of Francisca. In a crowd-sourcing test with more than 60 native speakers, we examined 30 audio samples produced by Francisca in the neutral style and another 30 in the cheerful style. Overall impressions were rated on a 1-5 Likert scale, with naturalness in rhythm variations, pitch variations, stresses, pauses, and intelligibility considered. Human speech and a pt-BR voice from another cloud service provider (company X) were used as benchmarks. Results showed very positive feedback on Francisca in both the neutral (4.44) and cheerful (4.38) styles.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_3" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P style="text-align: center; margin: 9.0pt 0in 9.0pt 0in;" align="center"&gt;&lt;I&gt;&lt;SPAN style="font-size: 11.0pt; font-family: 'Segoe UI',sans-serif;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Pt-br-MOS.png" style="width: 752px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/175359i1EDB72536D0312AF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Pt-br-MOS.png" alt="Figure 1. MOS comparison of Francisca with human speech and company X" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 1. MOS comparison of Francisca with human speech and company X&lt;/span&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;Hear what Francisca sounds like.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Francisca_neutral-Waves-VoiceAssistant-00025.wav" target="_blank" rel="noopener"&gt;&lt;EM&gt;Example 1: Francisca (neutral)&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Francisca_cheerful-Waves-VoiceAssistant-00009.wav" target="_blank" rel="noopener"&gt;&lt;EM&gt;Example 2: Francisca (cheerful)&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;High fidelity and controllable output &lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Like other neural voices, Francisca is created using a 24 kHz sampling rate. You can maximize the fidelity of neural voice outputs with the 24 kHz output formats:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;raw-24khz-16bit-mono-pcm&lt;/LI&gt;
&lt;LI&gt;riff-24khz-16bit-mono-pcm&lt;/LI&gt;
&lt;LI&gt;audio-24khz-160kbitrate-mono-mp3&lt;/LI&gt;
&lt;LI&gt;audio-24khz-96kbitrate-mono-mp3&lt;/LI&gt;
&lt;LI&gt;audio-24khz-48kbitrate-mono-mp3&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For scenarios where a lower sampling rate is required, for example playback over phone calls, Francisca and other neural voices can also easily be downsampled to a lower bit rate. Learn more about the &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-text-to-speech#audio-outputs" target="_blank" rel="noopener"&gt;output formats supported&lt;/A&gt;.&lt;/P&gt;
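&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For reference, the sketch below shows how one of the 24khz formats listed above might be requested from the Speech SDK for Python before synthesis. The key, region, and voice name are placeholders, and the exact enum member names should be checked against the output-format documentation linked above.&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Request one of the 24khz high-fidelity formats (enum name assumed; see the docs above).
speech_config.set_speech_synthesis_output_format(
    speechsdk.SpeechSynthesisOutputFormat.Audio24Khz96KBitRateMonoMp3)
speech_config.speech_synthesis_voice_name = "pt-BR-FranciscaNeural"  # assumed voice name

audio_config = speechsdk.audio.AudioOutputConfig(filename="francisca.mp3")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
synthesizer.speak_text_async("Olá! Esta é a voz neural Francisca.").get()&lt;/PRE&gt;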
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Hear text aloud with Read Aloud in the new Edge browser&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Neural TTS is powering Microsoft services at scale. The Francisca voice is now supported in the new &lt;A href="https://blogs.windows.com/msedgedev/2020/01/15/upgrading-new-microsoft-edge-79-chromium/" target="_blank" rel="noopener"&gt;Microsoft Edge&lt;/A&gt;, enabling you to hear text read aloud anytime, anywhere with natural voices.&lt;/P&gt;
&lt;DIV id="tinyMceEditorQinying Liao_5" class="mceNonEditable lia-copypaste-placeholder"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="edge-readaloud.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/175357i43C5C421805192F5/image-size/large?v=v2&amp;amp;px=999" role="button" title="edge-readaloud.png" alt="Figure 2. Neural TTS in Edge Read-Aloud" /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 2. Neural TTS in Edge Read-Aloud&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;Edge Read Aloud also makes it easy to follow along with text, supporting the output of word boundaries so each word being read out is simultaneously highlighted in the UI. This is an essential feature for immersive reading scenarios. To build your own Read Aloud apps, check out the &lt;/SPAN&gt;&lt;A style="font-family: inherit; background-color: #ffffff;" href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/1d1b13a7ab154cb884c9db1be3f22a0a8a876301/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs" target="_self"&gt;SynthesisWordBoundaryEventAsync&lt;/A&gt;&lt;SPAN style="font-family: inherit;"&gt; function in our sample code.&lt;/SPAN&gt;&lt;/P&gt;
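&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The sample linked above is C#; a rough Python equivalent of subscribing to word-boundary events is sketched below. The synthesis_word_boundary event and its properties reflect our reading of the Speech SDK for Python, so verify the exact names against the SDK reference before relying on them.&lt;/P&gt;
&lt;PRE&gt;import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def on_word_boundary(evt):
    # audio_offset is in ticks (100 ns); text_offset indexes into the input text (assumed names).
    print("Word boundary at audio offset", evt.audio_offset, "text offset", evt.text_offset)

synthesizer.synthesis_word_boundary.connect(on_word_boundary)
synthesizer.speak_text_async("Read aloud can highlight each word as it is spoken.").get()&lt;/PRE&gt;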
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Create a custom voice in Brazilian Portuguese&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;The same transfer learning technology is now shipped in the &lt;A href="https://speech.microsoft.com/customvoice" target="_blank" rel="noopener"&gt;Custom Neural Voice&lt;/A&gt;&amp;nbsp;capability, enabling organizations to create their one-of-a-kind digital voices with 5X less data while still delivering high-fidelity audio outputs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Brazilian Portuguese (pt-BR) added to the family, seven locales are now supported in the custom neural voice online training portal - American English (en-US), British English (en-GB), Indian English (en-IN), German, French, Chinese (zh-CN) and Brazilian Portuguese (pt-BR). More locales are available through customer engagement. &lt;SPAN&gt;&lt;A href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR3f_-mitwQlFp-aY9u7mCfFUQjJSQ09NMkY1QVRDTU4yNjRUVzBEREVGVCQlQCN0PWcu" target="_blank" rel="noopener"&gt;Submit a request to create your custom voice&lt;/A&gt; using the neural TTS technology.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Get started&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these updates, we’re excited to be powering natural and intuitive voice experiences. Text to Speech has more than &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#standard-voices" target="_blank" rel="noopener"&gt;75 standard voices in over 45 languages&lt;/A&gt; and locales in addition to our growing list of &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#neural-voices" target="_blank" rel="noopener"&gt;neural voices&lt;/A&gt;. Learn more about &lt;A href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/index-text-to-speech" target="_blank" rel="noopener"&gt;how you can get started&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Apr 2020 13:51:51 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/cognitive-services-adds-brazilian-portuguese-to-neural-text-to/ba-p/1210471</guid>
      <dc:creator>Qinying Liao</dc:creator>
      <dc:date>2020-04-02T13:51:51Z</dc:date>
    </item>
    <item>
      <title>How BERT is integrated into Azure automated machine learning</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/ba-p/1194657</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;We’re introducing the &lt;A href="https://arxiv.org/abs/1810.04805" target="_self"&gt;BERT&lt;/A&gt;&amp;nbsp;deep learning architecture for text data to &lt;A href="https://azure.microsoft.com/services/machine-learning/automatedml/" target="_self"&gt;Azure Automated ML&lt;/A&gt;. This model usually performs much better than older machine learning techniques that rely on &lt;A href="https://en.wikipedia.org/wiki/Bag-of-words_model" target="_self"&gt;bag of words&lt;/A&gt;-style features for text classification.&amp;nbsp; BERT, which is both a neural net architecture and a particular &lt;A href="https://ruder.io/transfer-learning/" target="_self"&gt;transfer learning&lt;/A&gt; technique, has had a huge impact on large and small companies (example use cases include&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/blog/bing-delivers-its-largest-improvement-in-search-experience-using-azure-gpus/" target="_self"&gt;Microsoft&lt;/A&gt;, &lt;A href="https://www.blog.google/products/search/search-language-understanding-bert/" target="_self"&gt;Google&lt;/A&gt;,&amp;nbsp;&lt;A href="https://multithreaded.stitchfix.com/blog/2019/07/15/give-me-jeans/" target="_self"&gt;Stitch Fix&lt;/A&gt;). Since Automated ML uses a BERT model that has already been pretrained on a large corpus of text, the user of Automated ML doesn't need very much training data to see a lot of benefit (even ~ 100s of rows are okay in some circumstances), which can be very valuable if labeled data is hard or expensive to acquire. We’ve implemented BERT in automated ML in such a way that it is first &lt;A href="http://wiki.fast.ai/index.php/Fine_tuning" target="_self"&gt;fine-tuned&lt;/A&gt; on the dataset the user provides, and then automated ML uses the&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/Sentence_embedding" target="_self"&gt;embeddings&lt;/A&gt; from the fine-tuned BERT model as text features for other ML algorithms like logistic regression or LightGBM.&amp;nbsp; Our implementation of BERT uses the popular transformers repository (&lt;A href="https://arxiv.org/abs/1910.03771" target="_self"&gt;paper&lt;/A&gt;, &lt;A href="https://github.com/huggingface/transformers" target="_self"&gt;github&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What does deep learning for text data do that older techniques don’t do?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Deep learning for text data gives you more accurate models compared to bag of words-based approaches to handling text data.&amp;nbsp;Bag of words-style features used for text in Automated ML include unigrams, bigrams, and tri-character grams.&amp;nbsp;Another text feature sometimes used in automated ML is static pretrained &lt;A href="https://en.wikipedia.org/wiki/Word_embedding" target="_self"&gt;word embeddings&lt;/A&gt;.&amp;nbsp;It’s not big news anymore that some deep learning architecture outperforms older “shallower” learning techniques, but what really grabbed our attention was seeing how much BERT tends to outperform bag of words-type approaches on small training data (100s of rows) as compared to larger data of 10,000 rows or more.&amp;nbsp;In general, we’ve found that BERT can often get the same accuracy as the bag of words approach with only 1/10 of the data!&amp;nbsp;This ability to learn from small data can really benefit your product if, as is often the case, labeled data is difficult or expensive to get.&amp;nbsp;To illustrate this, we ran Automated ML with and without BERT on a four-class news dataset (called &lt;A href="http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html" target="_self"&gt;AG News&lt;/A&gt;) and plotted the learning curves in Figure 1.&amp;nbsp;To illustrate the value of pretraining (both through BERT and pretrained word embeddings), we also trained a logistic regression model with unigram and bigram features as a simple baseline.&amp;nbsp;Notably, automated ML with BERT achieves 94.7% accuracy on AG News when trained with 120k rows, which would put it at 4th place on &lt;A href="https://paperswithcode.com/sota/text-classification-on-ag-news" target="_self"&gt;this leaderboard&lt;/A&gt; for AG News as of this writing.&amp;nbsp;To ensure that training does not take too long and to avoid GPU memory issues, automated ML uses a smaller BERT model (called bert-base-uncased) that will run on any Azure GPU VM.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="learning_curve.png" style="width: 720px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/173548iCCFB8BDAF7A93DB2/image-size/large?v=v2&amp;amp;px=999" role="button" title="learning_curve.png" alt="Figure 1: Learning curves in for bag of words style features vs BERT features for the AG News dataset." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 1: Learning curves in for bag of words style features vs BERT features for the AG News dataset.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What's so special about&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;BERT that makes it so much better than bag of words? &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Theoretically, BERT has a big advantage over bag of words methods in that BERT is sensitive to word order and long-range word interdependencies (e.g. the meaning of "it" might depend on a particular word tens of words to the left of "it"), but we were curious how this difference plays out with real-world data. So we examined how well BERT vs. bag of words does on our holdout set for this news classification dataset. After looking at many examples and the predictions from BERT versus simpler methods, we did notice a pattern: when a news article is misclassified by bag of words methods, it often contains one to a few words that shift the meaning of the entire document. To illustrate this, here are two examples that we constructed and fed into the models trained on the AG News dataset.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;EM&gt;1. “The two players were evenly matched, but the first player's skill with the &lt;/EM&gt;&lt;FONT color="#FF9900"&gt;&lt;STRONG&gt;joystick &lt;/STRONG&gt;&lt;/FONT&gt;&lt;EM&gt;pushed her over the top.”&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;EM&gt;2. "The two players were evenly matched, but the first player's skill with the &lt;/EM&gt;&lt;FONT color="#FF9900"&gt;&lt;STRONG&gt;hockey stick&lt;/STRONG&gt;&lt;/FONT&gt;&lt;EM&gt; pushed her over the top. "&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The sentence with "hockey stick" is easy to classify as being about "sports", and indeed the bag of words approach and BERT correctly classify it. The sentence with "joystick" is harder to classify because it has a lot of words ordinarily associated with sports, but it's actually in the "science &amp;amp; tech" category.&amp;nbsp; For the "joystick" sentence, BERT correctly predicts that it's in the "science &amp;amp; tech" category, while the bag of words-based model incorrectly predicts the "joystick" sentence as being about sports.&amp;nbsp; What’s particularly interesting about this example is how just one word, “joystick”, or phrase, “hockey stick”, dramatically changes the meaning of the document from being about video games to being about physical sports. This is why the bag of words approach fails for the "joystick" sentence: the preponderance of strongly “sports”-like words pushes the prediction to sports. BERT, on the other hand, doesn’t just model words as a static collection of distinct things (aka “bag of words”), but rather contains sophisticated mechanisms that ensure the features of one word both depend on that word’s position in a document and also depend on the other words in the document around it. This way it can know that a “player” in this document is actually a video game “player” by virtue of the presence of “joystick” elsewhere in the document. This sophistication in BERT, we hypothesize, is the reason BERT isn’t fooled into thinking a video game competition text snippet like example 1 is about physical sports.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How is BERT integrated into Automated ML?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;BERT is used in the featurization layer of Automated ML. In this layer we detect if a column contains free text or other types of data like timestamps or simple numbers and we featurize accordingly. For BERT we fine-tune/train the model by utilizing the user-provided labels, then we output document embeddings (for BERT these are the final hidden state associated with the special [CLS] token) as features alongside other features like timestamp-based features (e.g. day of week) or numbers that many typical datasets have. Please see Figure 2 for a schematic of this.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="schematic.png" style="width: 799px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/173683i9D5F7F6CA132C20B/image-size/large?v=v2&amp;amp;px=999" role="button" title="schematic.png" alt="Figure 2.  How a dataset with columns of data consisting of a timestamp, a count of some sort, two text columns, and class labels get featurized by BERT in Azure Automlated ML.  The blue blocks represents the data, which starts out as the raw inputs, and eventually get transformed into predictions, and the gray elements represent the machine learning pipeline." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 2.  How a dataset with columns of data consisting of a timestamp, a count of some sort, two text columns, and class labels get featurized by BERT in Azure Automlated ML.  The blue blocks represents the data, which starts out as the raw inputs, and eventually get transformed into predictions, and the gray elements represent the machine learning pipeline.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;It’s worth noting that we don’t technically need to train BERT, since it is pretrained on a large corpus of text. However, there’s no good way to get useful document embeddings unless BERT is fine-tuned. One way to see what fine-tuning does to the embeddings is to visualize BERT-generated document embeddings using a &lt;A href="https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding" target="_self"&gt;t-SNE&lt;/A&gt; plot (this visualization method places points close together in two dimensions according to the probability that they are close together in the original few-hundred-dimensional embedding space). We created two 2D t-SNE plots (Figure 3): one from a BERT model fine-tuned on 1% of a dataset and another from a BERT model fine-tuned on the full dataset. Each point represents a document, and its color is the ground-truth class label of that document. Both models use the same four-class text dataset. In the 1% case (left), the embeddings don’t display much class structure, since most points belong to one blob. On the right, where BERT was trained on the full dataset, the class structure is much more obvious. So it’s apparent that fine-tuning BERT is quite important if you want it to generate embeddings that “know” about the particularities of a dataset!&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="embeddings (5).png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/174217i781EE683D2F3B5B2/image-size/large?v=v2&amp;amp;px=999" role="button" title="embeddings (5).png" alt="Figure 3: BERT document embeddings (coming the final hidden state of the special [CLS] token).  Note there is not much structure when BERT is trained on small fraction of a four class dataset, but on the full dataset the four classes are clearly present." /&gt;&lt;span class="lia-inline-image-caption" onclick="event.preventDefault();"&gt;Figure 3: BERT document embeddings (coming the final hidden state of the special [CLS] token).  Note there is not much structure when BERT is trained on small fraction of a four class dataset, but on the full dataset the four classes are clearly present.&lt;/span&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With the gains of BERT for training data both big and small in mind, we conclude with a recommendation for when you should use BERT and when you might prefer a bag of words-based model. If you need predictions to be very fast (e.g. under a few milliseconds per prediction) and/or you want to perform predictions on a CPU, you should not use BERT and should stick with bag of words-based models. In the end, you will need to make a choice regarding this trade-off between the fast inference time of bag of words-type models and the higher accuracy of BERT.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;So how can I try BERT in Azure Automated ML?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To get started with Azure automated machine learning, you can read our docs &lt;A href="https://docs.microsoft.com/azure/machine-learning/concept-automated-ml#how-automated-ml-works" target="_self"&gt;here&lt;/A&gt;.&amp;nbsp; If you're comfortable with python, you can jump right into our &lt;A href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb" target="_self"&gt;Jupyter notebook that illustrates BERT&lt;/A&gt;.&amp;nbsp; The main thing to keep in mind is that to benefit from BERT, you need to&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Have a classification dataset with 1 or more columns of text (you can have other columns of e.g. categorical data, but BERT will only train on the text column(s) as indicated by Figure 2)&lt;/LI&gt;
&lt;LI&gt;Select an Azure VM with a GPU (e.g. "&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/nc-series" target="_self"&gt;STANDARD_NC6&lt;/A&gt;").&amp;nbsp; If you're creating the compute using the Python SDK, then see our &lt;A href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb" target="_self"&gt;BERT notebook&lt;/A&gt;&amp;nbsp;for how to select a GPU (a minimal SDK sketch also follows this list).&amp;nbsp; If you're using &lt;A href="https://ml.azure.com/" target="_self"&gt;Azure Machine Learning Studio&lt;/A&gt; (i.e. you're using Automated ML from a browser-based UI), when prompted to select a VM size, be sure to select a GPU-enabled VM size, and when selecting the task type, check the "enable deep learning" checkbox.&lt;/LI&gt;
&lt;/OL&gt;
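&lt;P&gt;To make these two steps concrete, here is a minimal sketch using the Python SDK (azureml-core and azureml-train-automl). The dataset URL, cluster name, experiment name, and column names are placeholders, and the exact arguments may differ slightly from the linked notebook; the key pieces are the GPU VM size and the enable_dnn flag.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hedged sketch: provision a GPU cluster and submit an Automated ML text
# classification run with DNN (BERT) featurization enabled.
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()

# GPU-enabled VM size, as required for BERT featurization (cluster name is a placeholder).
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6", max_nodes=2)
gpu_cluster = ComputeTarget.create(ws, "gpu-cluster", compute_config)
gpu_cluster.wait_for_completion(show_output=True)

# A tabular dataset with at least one free-text column (URL and column names are placeholders).
train_data = Dataset.Tabular.from_delimited_files("https://example.com/news_train.csv")

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="label",
    compute_target=gpu_cluster,
    primary_metric="accuracy",
    enable_dnn=True,  # enables DNN featurizers such as BERT for text columns
)

run = Experiment(ws, "text-classification-bert").submit(automl_config)
run.wait_for_completion(show_output=True)&lt;/LI-CODE&gt;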
&lt;P&gt;Once your automated ML run is complete, you can use your trained automl model to do inference on a GPU or a&amp;nbsp;CPU Azure VM, just note that performing inference on a GPU will be much faster.&lt;/P&gt;
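&lt;P&gt;As a follow-up to the sketch above, retrieving the best fitted pipeline and scoring new text might look like this (the text column name and example row are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import pandas as pd

# "run" is the AutoMLRun submitted in the sketch above; get_output() returns
# the best child run and the fitted scikit-learn-style pipeline.
best_run, fitted_model = run.get_output()

test_df = pd.DataFrame({"text": ["The new console's joystick shipped with a faulty trigger."]})
print(fitted_model.predict(test_df))&lt;/LI-CODE&gt;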
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Contributors (alphabetical order):&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Eric Clausen-Brown – Senior Data &amp;amp; Applied Scientist&lt;/P&gt;
&lt;P&gt;Zubin Pahuja – Software Engineer&lt;/P&gt;
&lt;P&gt;Anup Shirgaonkar – Principal Data &amp;amp; Applied Scientist&lt;/P&gt;
&lt;P&gt;Arjun Singh – Intern Data &amp;amp; Applied Scientist&lt;/P&gt;</description>
      <pubDate>Tue, 03 Mar 2020 16:54:10 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/ba-p/1194657</guid>
      <dc:creator>eric-cb</dc:creator>
      <dc:date>2020-03-03T16:54:10Z</dc:date>
    </item>
    <item>
      <title>Azure and OpenCV partner on Deep Learning with PyTorch course</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/azure-and-opencv-partner-on-deep-learning-with-pytorch-course/ba-p/1191411</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;A style="padding: 10px 10px; background-color: #0078d4; color: white; text-decoration: none;" href="https://opencv.org/courses/" target="_blank" rel="noopener"&gt;Register Now&lt;/A&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;OpenCV (Open Source Computer Vision Library) is the leading open source library and community for computer vision, image processing, and machine learning. The library has been used for real-time tasks such as monitoring mine equipment in China, detecting intrusions through surveillance video in Israel, and product label inspection in factories around the world.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;The non-profit organization is dedicated to advancing the beneficial uses of computer vision within society. As part of this mission, OpenCV has committed itself to creating online courses in AI to educate a global workforce. These courses are geared towards anyone interested in learning AI and designed to equip students with practical skills for solving real-world problems and help them stand out in the job market.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Microsoft shares OpenCV's commitment to making an AI education accessible for everyone. Today, we're excited to announce that Azure AI is partnering with OpenCV to offer 100 hours of free GPU credit to all students enrolled in their &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopencv.org%2Fcourses%2F%23bundles&amp;amp;data=02%7C01%7CCourtney.Luk%40microsoft.com%7C8777d9d531bc412135a008d7b9516b5e%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637181631648653153&amp;amp;sdata=n4xgh1cdHbABvXHfTWdIVGeNpPqtNYyFUh6rCyzLSUY%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;Deep Learning with PyTorch course&lt;/A&gt; that will be taught on Azure.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;I&gt;&amp;nbsp;&lt;/I&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;I&gt;"OpenCV.org is honored to receive 100 hours of free GPU time by Microsoft on its Azure Platform for our students enrolled in the Deep Learning with PyTorch course. The notebook based Azure platform is easy for beginners and very flexible for AI practitioners. Not only is Microsoft leading AI research, it is helping make AI education accessible to students around the world while promoting open standards. We are thankful for Microsoft's commitment to the open source community."&lt;/I&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;-Satya Mallick, CEO of OpenCV&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Deep Learning with PyTorch will first provide students with a theoretical understanding of simple neural nets and then gradually move to explore Deep Neural Nets and Convolutional Neural Networks. It will then give them hands-on experience for successfully training Deep Neural Networks, which requires a significant amount of time and compute. By providing students with GPU credit for their coursework, we hope to provide them with the best learning experience that sets them up for success.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Along with this hands-on education, OpenCV's Deep Learning with PyTorch course will provide students with a discussion forum where they can collaborate with one another on questions and challenges and get help from dedicated mentors. Upon completion of the course, students will receive a certification that recognizes them as OpenCV certified AI practitioners.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;The Deep Learning with PyTorch course was originally only open to those who committed through OpenCV's &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.kickstarter.com%2Fprojects%2Fsatyamallick%2Fai-courses-by-opencvorg&amp;amp;data=02%7C01%7CCourtney.Luk%40microsoft.com%7C8777d9d531bc412135a008d7b9516b5e%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637181631648663141&amp;amp;sdata=2uDqQgNOzYYYzrspJaknnATv2SL3cb1Tjxqc6JJhdKA%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;AI Courses Kickstarter&lt;/A&gt;, but registration has been re-opened to anyone interested starting today (February 24). Course registration will remain open for the next 8 days on &lt;A href="https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopencv.org%2Fcourses%2F%23bundles&amp;amp;data=02%7C01%7CCourtney.Luk%40microsoft.com%7C8777d9d531bc412135a008d7b9516b5e%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637181631648663141&amp;amp;sdata=lFNQ0SH75cV1SK6dsUGabVZU%2B9YTqXhZLKvc%2BynffB4%3D&amp;amp;reserved=0" target="_blank" rel="noopener"&gt;OpenCV.org's course site&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="DLWPT.png" style="width: 999px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/175514iFB78C9F65ED78797/image-size/large?v=v2&amp;amp;px=999" role="button" title="DLWPT.png" alt="DLWPT.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2020 16:45:04 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/azure-and-opencv-partner-on-deep-learning-with-pytorch-course/ba-p/1191411</guid>
      <dc:creator>Courtney_Luk</dc:creator>
      <dc:date>2020-03-05T16:45:04Z</dc:date>
    </item>
    <item>
      <title>(Nearly) Everything you need to know about computer vision in one repo</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/nearly-everything-you-need-to-know-about-computer-vision-in-one/ba-p/1070311</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&lt;EM&gt;This post was co-authored by&amp;nbsp;@JS Tan,&amp;nbsp;@Patrick Buehler, &lt;LI-USER uid="8372"&gt;&lt;/LI-USER&gt; and @Jun Ki Min&lt;/EM&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;In recent years, we've seen extraordinary growth in Computer Vision, with applications in image understanding, search, mapping, semi-autonomous&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;or&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;autonomous v&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ehicles&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and many more&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The ability for models to understand actions&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;in a video&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;a task that was&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;unthinkable just a few years ago&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, is now something that we can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;achieve with relatively high accuracy&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and in near real-time&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="action_recognition2.gif" style="width: 405px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/162163i5F2EA32E6EDC8F94/image-size/large?v=v2&amp;amp;px=999" role="button" title="action_recognition2.gif" alt="action_recognition2.gif" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P style="text-align: left;"&gt;&amp;nbsp;Action Recognition&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;However,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the field is&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;n&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;o&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;t&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;particularly&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;welcoming for newcomers.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Without prior experience or guidance, building&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;an accurate classifier c&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;an easily take&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;weeks. Unless you're ready to spend&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;a&amp;nbsp;long-time&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;learning computer vision, it's extremely hard to master the basics, let alone begin to explore some of the cutting-edge technologies in the field. Even for computer vision experts, building a quick&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Proof of Concept (POC)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;can be&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;non&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;trivial and could easily end up taking&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;many&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;days&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to put together.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;At&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, we have been working for many years on diverse Computer Vision solutions for our customers and collected our learnings into our new&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;public&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft repo&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sitory:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/microsoft/ComputerVision-recipes" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;https://github.com/microsoft/ComputerVisio&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;n-recipes&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Th&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;e goal of th&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;is repository&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;is to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;provide examples and best practice guidelines for building computer vision systems&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;on Azure&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, and to share this with the open-source community&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;More specifically, o&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;ur goal was to create a repository that will help us to provide solutions&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;rapidly&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;community&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;customers&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;that we work with&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;, or with on-boarding new team members who may have expertise in data science, but not specifically in computer vision.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;From mastering some of the most common scenarios in the field, like image classification, object detection&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;and image similarity, to exploring cutting edge scenarios like activity recognition and crowd counting, this repo will guide you through building models, fine-tuning them, and using them in real-world scenarios.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;We're kicking off our repo with 5 scenarios:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;TABLE style="height: 504px;" data-tablestyle="MsoTableGridLight" data-tablelook="1696"&gt;
&lt;TBODY&gt;
&lt;TR style="height: 30px;"&gt;
&lt;TD style="height: 30px; width: 107px;" data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Scenario&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 30px; width: 77px;" data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Support&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 30px; width: 539px;" data-celllook="0"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Description&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="height: 57px;"&gt;
&lt;TD style="height: 57px; width: 107px;" data-celllook="0"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/ComputerVision/tree/master/scenarios/classification" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Classification&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 57px; width: 77px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Base&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 57px; width: 539px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Image Classification&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;is a way to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;learn and predict the category of a given image.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ex: Is the picture of a ‘dog’ or a ‘cat’?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="height: 111px;"&gt;
&lt;TD style="height: 111px; width: 107px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/ComputerVision/tree/master/scenarios/similarity" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Similarity&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 111px; width: 77px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Base&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 111px; width: 539px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Image Similarity is a way to compute a similarity score given a pair of images. Given an image, it allows you to identify the most similar image&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;dataset.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ex: This picture of a dog is the most&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;like&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;which of the following images of animals?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="height: 84px;"&gt;
&lt;TD style="height: 84px; width: 107px;" data-celllook="0"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/ComputerVision/tree/master/scenarios/detection" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Detection&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 84px; width: 77px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Base&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 84px; width: 539px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Object Detection is a supervised machine learning technique that allows you to detect where on a given image an object of interest is.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ex: Where in the image are there&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;animals&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="height: 84px;"&gt;
&lt;TD style="height: 84px; width: 107px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/ComputerVision/tree/master/contrib/action_recognition" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Action Recognition&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 84px; width: 77px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Contrib&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 84px; width: 539px;" data-celllook="65536"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Action Recognition is used to identify in video footage what actions are performed and at what respective start/end times.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;(&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Ex: When is there someone&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;drinking&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;in the video?&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR style="height: 138px;"&gt;
&lt;TD style="height: 138px; width: 107px;" data-celllook="0"&gt;
&lt;P&gt;&lt;A href="https://github.com/microsoft/ComputerVision/tree/master/contrib/crowd_counting" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Crowd Counting&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 138px; width: 77px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Contrib&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD style="height: 138px; width: 539px;" data-celllook="0"&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Crowd Counting is a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;use-case that leverages&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;supervised machine learning technique&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;s&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;to count the number of people in an image – this applies to both low-crowd-density (e.g. less than&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;5&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;0 people) and high-crow&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;d-density (e.g. thousands of people). (Ex. How many p&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;edestrians&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;are in this image of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;a street&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;?)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:120,&amp;quot;335559739&amp;quot;:120,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Rather than creating implementations from scratch, we draw from popular state-of-the-art libraries (e.g.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;fast.ai&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://pytorch.org/docs/stable/torchvision/index.html" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;torchvision&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;we&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;build additional utility around loading image data, optimizing&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;models&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;evaluating models. In addition, we aim to answer the frequently asked questions, try to explain the deep learning&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;intuitions,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and highlight common pitfalls.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Whether you&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;a&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;re an expert in computer vision or just getting your hands wet, we believe this repository offers something for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;you&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. For the beginner, this repo will guide you through building a state-of-the-art model and help you develop an intuition for the craft. For the experts, this repo&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;sitory&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;can quickly get you to a strong baseline model which is easy to extend using custom Python/Py&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;T&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;orch&amp;nbsp;code.&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;In addition, the repository also aims to provide support with 1) the full data science process, and 2) the tooling to succeed on Azure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We hope that these examples and utilities will make it easier and faster for developers to create custom vision applications.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;The Data Science Process&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The Computer Vision&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Recipes&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;GitHub repository shows you how to approach the five key steps of the data science process&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and provides utilities to enrich each of the steps&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Data preparation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;- Prepar&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and load&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;your&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;data&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:40,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Modeling&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;-&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Build models using&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;deep learning algorithms&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:40,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Evaluating&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;–&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Evaluat&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e your model. Depending on the metric you’re interested in optimizing, you may want to explore different methods of evaluation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:40,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Model selection and optimization&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;- Tun&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and optimiz&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;hyperparameters&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to get the highest performing model.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Because Computer Vision models are often computationally costly, we show you how to seamlessly scale your parameter tuning into Azure&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:40,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Operationalizing&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;- Operationaliz&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;models in a production environment on Azure&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;by deploying it onto Kubernetes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:40,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Inside the computer vision&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;recipes&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;repo, we&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;ha&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ve added a lot of utility&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;to support common tasks such as loading datasets in the format expected by different algorithms, splitting training/test data&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, and evaluating model outputs&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Machine Learning&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This computer vision repository also has deep integration with the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/azure/machine-learning/?WT.mc_id=azureai-blog-azureai" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Azure Machine Learning service&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;to complement your work locally.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;W&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;e provide code examples on how you can optionally and easily scale your training into the cloud, and how you can deploy your models for production workloads.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure Cognitive Services&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist which do not require any custom coding or machine learning expertise.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/computer-vision/?WT.mc_id=azureai-blog-azureai" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Vision Services&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;are a set of pre-trained REST APIs which can be called for image tagging, OCR, video analytics, and more. These APIs work out of the box and require minimal expertise in machine learning, but have limited customization capabilities. See the various demos available to get a feel for the functionality (e.g. Computer Vision), and the small sketch after this list for what a call looks like.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI data-leveltext="" data-font="Symbol" data-listid="2" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"&gt;&lt;A href="https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/?WT.mc_id=azureai-blog-azureai" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;Custom Vision&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is a SaaS service to train and deploy a model as a REST API given a user-provided training set. All steps, including image upload, annotation, and model deployment, can be performed using either the UI or a Python SDK. Training image classification or object detection models can be achieved with minimal machine learning expertise. Custom Vision offers more flexibility than the pre-trained Cognitive Services APIs, but requires the user to bring and annotate their own data.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
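&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As a small illustration of the first option, image tagging with the Computer Vision REST API boils down to a single authenticated HTTP call. This is a hedged sketch: the endpoint, key, image URL, and API version are placeholders you would replace with your own resource's values.&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Hedged sketch: tag an image with the pre-trained Computer Vision REST API.
import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder resource endpoint
key = "YOUR-SUBSCRIPTION-KEY"                                   # placeholder key

response = requests.post(
    endpoint + "/vision/v3.1/analyze",
    params={"visualFeatures": "Tags"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/street-scene.jpg"},       # placeholder image URL
)
response.raise_for_status()

for tag in response.json()["tags"]:
    print(tag["name"], round(tag["confidence"], 2))&lt;/LI-CODE&gt;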
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Before using&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;the Computer Vision&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;repository, we strongly recommend evaluating if these can sufficiently solve your problem.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Scenario Example:&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Object Detection&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;To give you a sense of how you can use our repo to build a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;state of the art (SOTA)&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;model,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;here is&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;a preview of how simple it is to create an Object Detection model.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Of course,&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;you can go much deeper and&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;add custom&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;P&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;y&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;T&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;orch&amp;nbsp;code, but getting started is as&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;simple as&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;this&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;1. Load your data&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;The first step is to load your data – we help you do this with a simple object that automatically parses your data and the annotations:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from utils_cv.detection.data import DetectionLoader 
data = DetectionLoader("path/to/data") &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;2. Train/fine-tune your model&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Then we create a 'learner' object that helps you manage and train your model. By default, it will use&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;torchvision's&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;Faster R-CNN model. But you can easily switch it out.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from utils_cv.detection.model import DetectionLearner 
detector = DetectionLearner(data) 
detector.fit() &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;3. Evaluate&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233279&amp;quot;:true,&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Finally, lets evaluate our model using the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;built-in&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;helper functions. We can&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;look&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;at the precision and recall curves to give us a sense of how our model is performing.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from utils_cv.detection.plot import plot_pr_curves 
eval = detector.evaluate() 
plot_pr_curves(eval) &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;As we continue to build out of repository,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;we will&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;be looking for new computer vision scenarios to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;unlock&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;. Feel free to reach out to&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;cvbp@microsoft.com&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;or post an issue if you wish to see us cover a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;scenario&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:1,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:285}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Mar 2020 17:53:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/nearly-everything-you-need-to-know-about-computer-vision-in-one/ba-p/1070311</guid>
      <dc:creator>jiata</dc:creator>
      <dc:date>2020-03-03T17:53:06Z</dc:date>
    </item>
    <item>
      <title>Accelerating Distributed Training in Azure Machine Learning service using SR-IOV</title>
      <link>https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050</link>
      <description>&lt;P&gt;&lt;EM&gt;Author: Ravi Shankar Kolli&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;This post is co-authored by Mathew Salvaris, Aashna Garg, Vaibhav Jain, Reyhan Patia, Caghan Demirci, Alex Sutton&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Today’s state-of-the-art deep learning models like BERT require distributed, multi-machine training to reduce training time from weeks to days. The interconnect is one of the key components for reducing communication overhead and achieving good scaling efficiency in distributed multi-machine training.&lt;/P&gt;
&lt;P&gt;Azure Machine Learning users can now speed up their training time by taking advantage of Azure Virtual Machines with SR-IOV and InfiniBand&amp;nbsp;&lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/hpc/enable-infiniband" target="_self"&gt;support&lt;/A&gt;. In September 2018, Azure introduced the NC, ND, and H-series VMs with dedicated InfiniBand networks. All RDMA-enabled sizes are capable of leveraging that network using Intel MPI. SR-IOV stands for “single root input/output virtualization”, which optimizes the sharing of PCI Express devices in a system with virtual machines. In Azure, SR-IOV for InfiniBand enables near bare-metal performance for any MPI library.&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;MPI, or message-passing interface, is a communication library&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;commonly used for distributed training between GPUs on many systems. Nvidia’s NCCL software uses MPI to make distributed training easier in deep learning frameworks like&amp;nbsp;PyTorch&amp;nbsp;and TensorFlow.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure now supports using any MPI library with SR-IOV enabled VM families such as NCv3, NDv2, and HC or HB for HPC applications. Older GPU hardware with InfiniBand such as NCv2 and NDv1 will be updated for SR-IOV in 2020.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:6,&amp;quot;335551620&amp;quot;:6,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Intel MPI version 5.x will continue to be supported as w&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;i&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ll all subsequent Intel MPI versions.&amp;nbsp; In addition, all other MPIs supported by the Open Fabric Enterprise Distribution (OFED),&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;OpenMPI&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, and Nvidia’s NCCL2 library, providing optimized performance for GPUs&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;are&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;supported.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;These enhancements will provide customers with higher InfiniBand bandwidth, lower latencies, and most importantly, better distributed application performance. InfiniBand connectivity provides higher throughput and lower latencies compared to an Ethernet-based connection. SR-IOV enables communication over an InfiniBand network using any flavor of MPI. A reference implementation of BERT in Azure Machine Learning using SR-IOV and InfiniBand can be found on &lt;A href="https://github.com/microsoft/AzureML-BERT" target="_blank" rel="noopener"&gt;GitHub&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Throughput Improvement in BERT&lt;/H2&gt;
&lt;P&gt;SR-IOV and InfiniBand provided up to a 75% improvement in the throughput of the BERT Large model. With SR-IOV enabled, throughput improves to about 28 sequences/second/GPU, 75% better than the baseline. The charts below show the throughput improvement of BERT Large pretraining on 16 Azure Standard_NC24rs_v3 VMs; a minimal training-script sketch follows the charts. The model is implemented in PyTorch and uses torch.distributed and Open MPI for multi-node training. Note that these charts do not reflect the best achievable BERT throughput on Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_27.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160981i4340AF21610DDDC8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_27.png" alt="clipboard_image_27.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_28.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160980i808699330F9EECDC/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_28.png" alt="clipboard_image_28.png" /&gt;&lt;/span&gt;&lt;/P&gt;
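&lt;P&gt;The linked GitHub repository contains the full BERT reference implementation. Purely as a sketch of the training-script side, the snippet below initializes torch.distributed with the NCCL backend (which uses InfiniBand RDMA when SR-IOV exposes it) and wraps a placeholder model in DistributedDataParallel; the model, data, and hyperparameters are illustrative only.&lt;/P&gt;
&lt;PRE&gt;# Sketch: NCCL-backed distributed training loop. MASTER_ADDR, MASTER_PORT, RANK,
# WORLD_SIZE and LOCAL_RANK are expected to be provided by the MPI / Azure ML launcher.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

def main():
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    local_rank = int(os.environ.get("LOCAL_RANK", 0))

    # NCCL picks the RDMA transport over InfiniBand automatically when available.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()            # stand-in for BERT
    model = DistributedDataParallel(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()                                     # gradients all-reduced via NCCL
        optimizer.step()

if __name__ == "__main__":
    main()&lt;/PRE&gt;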
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Throughput Improvement in ResNet&lt;/H2&gt;
&lt;P&gt;To measure the speed improvements for PyTorch, we ran a selection of &lt;STRONG&gt;ResNet&lt;/STRONG&gt; models from Torchvision on synthetic data at &lt;STRONG&gt;full precision&lt;/STRONG&gt;. This let us estimate throughput without worrying about I/O overhead; a short benchmarking sketch follows the charts below. The figures below compare clusters with SR-IOV enabled against those without it. We used &lt;STRONG&gt;NC24rs_v3&lt;/STRONG&gt; VMs, each equipped with 4 V100 GPUs, so a result reported for 8 GPUs spans 2 nodes and one for 16 GPUs spans 4 nodes. Across models and GPU configurations, SR-IOV offers a 2-3x improvement over the configuration without SR-IOV.&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_29.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160982iAB5179FC2820892E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_29.png" alt="clipboard_image_29.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_30.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160983iD119CB268EE02780/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_30.png" alt="clipboard_image_30.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_31.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160985i9E22C1B81C4D5A10/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_31.png" alt="clipboard_image_31.png" /&gt;&lt;/span&gt;&lt;/P&gt;
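&lt;P&gt;The synthetic-data measurement described above can be approximated with a short script like the one below: random tensors are pushed through a Torchvision ResNet at full (FP32) precision so that data loading does not distort the throughput number. The model name, batch size, and iteration counts are illustrative, not the exact settings behind these charts.&lt;/P&gt;
&lt;PRE&gt;# Sketch: per-GPU throughput on synthetic data, full precision.
import time
import torch
import torchvision.models as models

def benchmark(model_name="resnet50", batch_size=64, iters=100, warmup=10):
    model = getattr(models, model_name)().cuda().train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()
    images = torch.randn(batch_size, 3, 224, 224, device="cuda")   # synthetic inputs
    labels = torch.randint(0, 1000, (batch_size,), device="cuda")

    for i in range(warmup + iters):
        if i == warmup:                      # start timing after warm-up iterations
            torch.cuda.synchronize()
            start = time.time()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize()
    elapsed = time.time() - start
    print(model_name, "images/sec per GPU:", round(batch_size * iters / elapsed, 1))

benchmark()&lt;/PRE&gt;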
&lt;P&gt;In the figures below, the number reported in the center of each bar is the scaling efficiency on RDMA-enabled VMs. For both Horovod and DistributedDataParallel, each using NCCL, the scaling efficiency is over 90% across all three models, with performance almost doubling as the number of GPUs doubles (see the sketch after the charts for how scaling efficiency is computed).&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_32.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160984iE3D4851FD4A87F6D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_32.png" alt="clipboard_image_32.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_33.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160986iA770BEB47E255349/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_33.png" alt="clipboard_image_33.png" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="clipboard_image_34.png" style="width: 400px;"&gt;&lt;img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/160987i126C10EEE00DF80C/image-size/medium?v=v2&amp;amp;px=400" role="button" title="clipboard_image_34.png" alt="clipboard_image_34.png" /&gt;&lt;/span&gt;&lt;/P&gt;
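&lt;P&gt;For clarity, scaling efficiency here means the aggregate throughput on N GPUs divided by N times the single-GPU throughput, so 100% corresponds to perfectly linear scaling. The snippet below shows the calculation with placeholder numbers that are not taken from the figures.&lt;/P&gt;
&lt;PRE&gt;# Scaling efficiency: aggregate throughput on n GPUs relative to n * single-GPU throughput.
def scaling_efficiency(throughput_n, throughput_1, n_gpus):
    return throughput_n / (n_gpus * throughput_1)

single_gpu = 300.0    # hypothetical images/sec on 1 GPU
eight_gpus = 2200.0   # hypothetical aggregate images/sec on 8 GPUs (2 nodes)
print(round(100 * scaling_efficiency(eight_gpus, single_gpu, 8), 1), "% efficiency")&lt;/PRE&gt;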
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Summary&lt;/H2&gt;
&lt;P&gt;SR-IOV yielded significant throughput improvements for distributed multi-machine training. BERT Large throughput increased by 75% with SR-IOV, and certain ResNet models were about 2-3x faster. Throughput also scaled nearly linearly on the ResNet models as the number of NC24rs_v3 nodes grew from 1 to 2, 4, and 8 instances.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Stay tuned for our next blog on scaling distributed deep learning training on Azure &lt;A href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series" target="_blank" rel="noopener"&gt;NDv2&lt;/A&gt; VMs. These VMs feature 8 NVIDIA Tesla V100 GPUs interconnected with NVLINK, 32 GB of HBM2 memory per GPU, and a 100 Gbps EDR InfiniBand interconnect.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started with Distributed Deep Learning training on &lt;A href="https://azure.microsoft.com/en-us/services/machine-learning/" target="_blank" rel="noopener"&gt;Azure Machine Learning&lt;/A&gt;. Report any implementation issues or observed throughput improvements of SR-IOV on Azure Machine Learning at &lt;A href="https://stackoverflow.com/questions/tagged/azure-machine-learning-service" target="_blank" rel="noopener"&gt;Stack Overflow&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Dec 2019 18:13:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050</guid>
      <dc:creator>Ravi_Kolli</dc:creator>
      <dc:date>2019-12-11T18:13:06Z</dc:date>
    </item>
  </channel>
</rss>

