Train smarter with NVIDIA pre-trained models and TAO Transfer Learning Toolkit on Microsoft Azure

Published Jul 08 2021 08:56 AM 3,238 Views

One of the many challenges of deploying AI at the edge is that IoT devices have limited compute and memory resources, so it is extremely important that your model is both accurate and compact enough to deliver real-time inference. Balancing model accuracy against model size is always a challenge: smaller, shallower networks suffer from poor accuracy, while deeper networks are not suitable for the edge. Additionally, achieving state-of-the-art accuracy requires collecting and annotating large sets of training data and deep domain expertise, which can be cost-prohibitive for many enterprises looking to bring their AI solutions to market faster. NVIDIA’s catalog of pre-trained models and the Transfer Learning Toolkit (TLT) can help you accelerate your model development. TLT is a core component of NVIDIA TAO, an AI-model-adaptation platform. TLT provides a simplified training workflow geared toward non-experts, so you can quickly get started building AI using pre-trained models and guided Jupyter notebooks. TLT also offers several performance optimizations that make the model compact for high throughput and efficiency, accelerating your computer vision and conversational AI applications.

 

Training is compute-intensive, requiring access to powerful GPUs to speed up the time to solution. Microsoft Azure offers several GPU-optimized virtual machines (VMs) with access to NVIDIA A100, V100, and T4 GPUs.

In this blog post, we will walk you through the entire journey of training an AI model, from provisioning a VM on Azure to training with NVIDIA TLT in the Azure cloud.

 

Pre-trained models and TLT

 

Transfer learning is a training technique where you leverage the learned features from one model in another. You start with a pretrained model whose weights and biases have already been learned on large, representative datasets. These models can be easily retrained with custom data in a fraction of the time it takes to train from scratch.



Figure 1 - End-to-end AI workflow

 

The NGC catalog, NVIDIA’s hub of GPU-optimized AI and HPC software, contains a diverse collection of pre-trained models for computer vision and conversational AI use cases spanning industries from manufacturing to retail to healthcare and more. These models have been trained on images and large sets of text and speech data to give you a highly accurate starting point. For example, people detection, segmentation, and body pose estimation models can be used to extract occupancy insights in smart spaces such as retail stores, hospitals, factories, and offices. Vehicle and license plate detection and recognition models can be used for smart infrastructure. Automatic speech recognition (ASR) and natural language processing (NLP) models can be used for smart speakers, video conferencing, automated chatbots, and more. In addition to these use-case-specific models, you also have the flexibility to use general-purpose pre-trained models from popular open model architectures such as ResNet, EfficientNet, YOLO, UNet, and others. These can be used for general object detection, classification, and segmentation tasks.

 

Once you select your pre-trained model, you can fine-tune it on your dataset using TLT. TLT is a low-code, Jupyter-notebook-based workflow, allowing you to adapt an AI model in hours rather than months. The guided Jupyter notebooks and configurable spec files make it easy to get started.

 

Here are a few key features of TLT that optimize inference performance:

  • Model pruning removes unimportant nodes from the neural network, making the model compact and optimal for edge deployment while maintaining comparable accuracy.
  • INT8 quantization enables the model to run inference at lower INT8 precision, which is significantly faster than running in floating-point FP16 or FP32.

Pruning and quantization can be achieved with a single command in the TLT workflow.
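The two techniques can be sketched in a few lines of numpy. This is a deliberately simplified stand-in, not TLT's implementation: TLT prunes whole channels and uses calibrated/QAT INT8, while the sketch below prunes individual weights and uses a single per-tensor scale.

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

# --- Magnitude pruning: zero out the weights with the smallest absolute
# value. (TLT's prune step removes whole channels; per-weight pruning is a
# simplified stand-in for the same idea.)
prune_ratio = 0.5
threshold = np.quantile(np.abs(weights), prune_ratio)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)
sparsity = (pruned == 0).mean()

# --- INT8 quantization: map float weights to 8-bit integers with a single
# per-tensor scale factor, then dequantize to check the approximation error.
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale

max_err = np.abs(pruned - dequant).max()
print(f"sparsity: {sparsity:.2f}, max dequantization error: {max_err:.5f}")
```

Half the weights become zero (compact model) and the rest fit in 8 bits instead of 32, at the cost of a small, bounded rounding error per weight.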

Set up an Azure VM

 

We start by setting up an appropriate VM in the Azure cloud. You can choose from the following NVIDIA GPU-powered VM series: ND A100 v4, NCv3, and NCasT4_v3. For this blog, we will use the NCv3 series, which comes with V100 GPUs. For the base image on the VM, we will use the NVIDIA-provided GPU-optimized image from the Azure Marketplace. The NVIDIA base image includes all the lower-level dependencies, which reduces the friction of installing drivers and other prerequisites. Here are the steps to set up the Azure VM.

 

Step 1 - Pull the GPU optimized image from Azure marketplace by clicking on the “Get it Now” button.


Figure 2 - GPU optimized image on Azure Marketplace

 

Under the Software plan, select version v21.04.1, the latest version, which includes the latest NVIDIA drivers and CUDA toolkit. Once you select the version, you will be directed to the Azure portal, where you will create your VM.


Figure 3 - Image version selection window

 

Step 2 - Configure your VM

In the Azure portal, click “Create” to start configuring the VM.


Figure 4 - Azure Portal

 

This opens a page where you can select your subscription, resource group, region, and hardware configuration. Provide a name for your VM. Once you are done, click the “Review + Create” button at the bottom to do a final review.

Note: The default disk space is 32 GB. It is recommended to use a disk larger than 128 GB for this experiment.

 


Figure 5 - Create VM window

 

Make the final review of the offering that you are creating. Once done, hit the “Create” button to spin up your VM in Azure.

Note: Once the VM is created, you will start incurring costs, so please review the pricing details.

 


Figure 6 - VM review

 

Step 3 - SSH into your VM

Once your VM is created, SSH into your VM using the username and domain name or IP address of your VM.

 

 

ssh <username>@<IP address>

 

 

 

Training 2D Body Pose with TLT

 

In this section, we will walk through the steps of training a high-performance 2D body pose model with TLT. This is a fully convolutional model consisting of a backbone network, an initial prediction stage that performs pixel-wise prediction of confidence maps (heatmaps) and part-affinity fields (PAFs), followed by multistage refinement (0 to N stages) of the initial predictions. The model is further optimized by pruning and quantization, which allows it to run in real time on edge platforms like NVIDIA Jetson.
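To give a feel for how predictions are read out of such a model, here is a toy numpy sketch of decoding keypoints from per-joint confidence maps. This is an illustration only: real post-processing for this architecture also uses the PAFs to group joints into individual people.

```python
import numpy as np

def decode_keypoints(heatmaps, threshold=0.1):
    """Pick the peak of each confidence map as that joint's location.

    heatmaps: (num_joints, H, W) array of per-joint confidence maps.
    Returns a list of (x, y) peaks, or None for joints below `threshold`.
    """
    keypoints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints.append((int(x), int(y)) if hm[y, x] >= threshold else None)
    return keypoints

# Synthetic example: two joints, each a Gaussian blob on a 32x32 map.
H = W = 32
yy, xx = np.mgrid[0:H, 0:W]

def blob(cx, cy, sigma=1.5):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

heatmaps = np.stack([blob(10, 20), blob(25, 5)])
print(decode_keypoints(heatmaps))   # → [(10, 20), (25, 5)]
```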

In this blog, we will focus on how to run this model with TLT on Azure. If you would like to learn more about the model architecture and how to optimize the model, check out the two-part blog on training and optimizing 2D body pose with TLT - Part 1 and Part 2. Additional information about this model can be found in the NGC Model card.

 

Step 1 - Setup TLT

TLT requires a Python virtual environment. Run the commands below to set one up.

 

 

# Become root and allow the azureuser account to run docker
sudo su - root
usermod -a -G docker azureuser
# Install pip, unzip, and virtualenvwrapper
apt-get -y install python3-pip unzip
pip3 install virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
# Create and activate a virtual environment named "launcher"
mkvirtualenv launcher -p /usr/bin/python3

 

 

 

Install JupyterLab and the TLT Python package. TLT uses a Python launcher to start training runs. The launcher automatically pulls the correct Docker image from NGC and runs training in it. Alternatively, you can manually pull the Docker container and run the commands directly inside it. For this blog, we will use the launcher.

 

 

pip3 install jupyterlab
pip3 install nvidia-pyindex
pip3 install nvidia-tlt

 

 

 

Check that TLT is installed properly by running the command below, which lists the AI tasks supported by TLT.

 

 

tlt info --verbose

Configuration of the TLT Instance

dockers:
  nvcr.io/nvidia/tlt-streamanalytics:
    docker_tag: v3.0-py3
    tasks:
      1. augment
      2. classification
      3. detectnet_v2
      4. dssd
      5. emotionnet
      6. faster_rcnn
      7. fpenet
      8. gazenet
      9. gesturenet
      10. heartratenet
      11. lprnet
      12. mask_rcnn
      13. retinanet
      14. ssd
      15. unet
      16. yolo_v3
      17. yolo_v4
      18. tlt-converter
  nvcr.io/nvidia/tlt-pytorch:
    docker_tag: v3.0-py3
    tasks:
      1. speech_to_text
      2. text_classification
      3. question_answering
      4. token_classification
      5. intent_slot_classification
      6. punctuation_and_capitalization
format_version: 1.0
tlt_version: 3.0
published_date: mm/dd/yyyy
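Conceptually, the launcher maps each task name in this output to the container image that provides it. A toy Python sketch of that routing (an illustration of the idea, not TLT's actual code; the registry below lists only a subset of the tasks above):

```python
# Toy task -> container routing, loosely modeled on `tlt info --verbose`.
TASK_REGISTRY = {
    "nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3": [
        "classification", "detectnet_v2", "ssd", "unet", "yolo_v4",
    ],
    "nvcr.io/nvidia/tlt-pytorch:v3.0-py3": [
        "speech_to_text", "text_classification", "question_answering",
    ],
}

def resolve_container(task):
    """Return the Docker image that provides a given TLT task."""
    for image, tasks in TASK_REGISTRY.items():
        if task in tasks:
            return image
    raise ValueError(f"unsupported task: {task}")

print(resolve_container("detectnet_v2"))
# → nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
```

This is why you never pull training containers by hand when using the launcher: the task name on the `tlt` command line is enough to select the right image.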

 

 

 

Log in to NGC and download the Jupyter notebooks:

 

 

docker login nvcr.io

# Use the /mnt data disk as the workspace
cd /mnt/
sudo chown azureuser:azureuser /mnt/

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip

unzip -u tlt_cv_samples_v1.1.0.zip -d ./tlt_cv_samples_v1.1.0 && cd ./tlt_cv_samples_v1.1.0

 

 

 

Start your Jupyter notebook and open it in your browser.

 

 

jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

 

 

 

Step 2 - Open the Jupyter notebook and spec files

In the browser, you will see all the CV models supported by TLT. For this experiment, we will train a 2D body pose model: click the “bpnet” model in the Jupyter notebook. In this directory, you will also find Jupyter notebooks for popular networks like YOLOv3/v4, Faster R-CNN, SSD, UNet, and more; you can follow the same steps to train any of the other models.


Figure 7 - Model selection from Jupyter

 

Inside, you will find a few config files and a specs directory. The specs directory contains the ‘spec’ files that configure training and evaluation parameters. To learn more about the parameters, refer to the 2D body pose documentation.


Figure 8 - Body Pose estimation training directory

Step 3 - Step through the guided notebook

Open ‘bpnet.ipynb’ and step through the notebook. It covers the learning objectives and all the steps to download the dataset and pre-trained model, run training, and optimize the model. For this exercise, we will use the open-source COCO dataset, but you are welcome to use your own body pose dataset; Section 3.2 in the notebook covers using a custom dataset.
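For readers bringing their own data, it helps to know the shape of COCO-style keypoint annotations, where each person's joints are stored as flat [x, y, visibility] triplets. The sketch below is a minimal illustration of that public COCO layout only (the file name, joint names, and values are made up for the example); it is not a complete TLT dataset spec, for which you should follow Section 3.2 of the notebook.

```python
import json

# Minimal COCO-style keypoint annotation: one image, one person,
# keypoints stored as flat [x, y, visibility] triplets.
dataset = {
    "images": [{"id": 1, "file_name": "person_001.jpg", "width": 640, "height": 480}],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        # Two example joints: (x=320, y=100, visible) and (x=300, y=180, visible)
        "keypoints": [320, 100, 2, 300, 180, 2],
        "num_keypoints": 2,
    }],
    "categories": [{"id": 1, "name": "person", "keypoints": ["nose", "neck"]}],
}

# Each flat keypoint list unpacks back into (x, y, visibility) triplets:
kp = dataset["annotations"][0]["keypoints"]
triplets = [tuple(kp[i:i + 3]) for i in range(0, len(kp), 3)]
print(triplets)   # → [(320, 100, 2), (300, 180, 2)]

json.dumps(dataset)  # serializes cleanly as an annotation file
```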


Figure 9 - Jupyter notebook for training

 

In this blog, we demonstrated a body pose estimation use case with TLT, but you can follow the same steps to train any computer vision or conversational AI model. NVIDIA pre-trained models, the Transfer Learning Toolkit, and GPUs in the Azure cloud simplify the journey and lower the barrier to getting started with AI. The availability of GPUs in Microsoft Azure lets you start training quickly without investing in your own hardware infrastructure, and lets you scale computing resources based on demand.

By leveraging pre-trained models and TLT, you can quickly and easily adapt models to your use cases and develop high-performance models that can be deployed at the edge for real-time inference.

 

Get started today with NVIDIA TAO TLT on Azure Cloud.

 

Resources:

 

%3CLINGO-SUB%20id%3D%22lingo-sub-2528400%22%20slang%3D%22en-US%22%3ETrain%20smarter%20with%20NVIDIA%20pre-trained%20models%20and%20TAO%20Transfer%20Learning%20Toolkit%20on%20Microsoft%20Azure%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-2528400%22%20slang%3D%22en-US%22%3E%3CP%3EOne%20of%20the%20many%20challenges%20of%20deploying%20AI%20on%20edge%20is%20that%20IoT%20devices%20have%20limited%20compute%20and%20memory%20resources.%20So%2C%20it%20becomes%20extremely%20important%20that%20your%20model%20is%20accurate%20and%20compact%20enough%20to%20deliver%20real-time%20inference%20at%20the%20edge.%20Juggling%20between%20the%20accuracy%20of%20the%20model%20and%20the%20size%20is%20always%20a%20challenge%20when%20creating%20a%20model%3B%20smaller%2C%20shallower%20networks%20suffer%20from%20poor%20accuracy%20and%20deeper%20networks%20are%20not%20suitable%20for%20edge.%20Additionally%2C%20achieving%20state-of-the-art%20accuracy%20requires%20collecting%20and%20annotating%20large%20sets%20of%20training%20data%20and%20deep%20domain%20expertise%2C%20which%20can%20be%20cost-prohibitive%20for%20many%20enterprises%20looking%20to%20bring%20their%20AI%20solutions%20to%20market%20faster.%26nbsp%3B%20NVIDIA%E2%80%99s%20catalog%20of%20pre-trained%20models%20and%20%3CA%20href%3D%22https%3A%2F%2Fdeveloper.nvidia.com%2Ftransfer-learning-toolkit%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3ETransfer%20Learning%20Toolkit%3C%2FA%3E%20(TLT)%20can%20help%20you%20accelerate%20your%20model%20development.%20TLT%20is%20a%20core%20component%20of%20the%20%3CA%20href%3D%22https%3A%2F%2Fdeveloper.nvidia.com%2FTAO%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3ENVIDIA%20TAO%2C%20an%20AI-model-adaptation%20platform.%3C%2FA%3E%20TLT%20provides%20a%20simplified%20training%20workflow%20geared%20for%20the%20non-experts%20to%20quickly%20get%20started%20building%20AI%20using%20pre-trained%20models%20and%20guided%20Jupyter%20notebooks.%20TLT%20offer
s%20several%20performance%20optimizations%20that%20make%20the%20model%20compact%20for%20high%20throughput%20and%20efficiency%2C%20accelerating%20your%20Computer%20Vision%20and%20Conversational%20AI%20applications.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ETraining%20is%20compute-intensive%2C%20requiring%20access%20to%20powerful%20GPUs%26nbsp%3B%20to%20speed%20up%20the%20time%20to%20solution.%20Microsoft%20Azure%20Cloud%20offers%20several%20%3CA%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fvirtual-machines%2Fsizes-gpu%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noreferrer%22%3EGPU%20optimized%20Virtual%20machines%20%3C%2FA%3E(VM)%26nbsp%3B%20with%20access%20to%20NVIDIA%20A100%2C%20V100%20and%20T4%20GPUs.%3C%2FP%3E%0A%3CP%3EIn%20this%20blog%20post%2C%20we%20will%20walk%20you%20through%20the%20entire%20journey%20of%20training%20an%20AI%20model%20starting%20with%20provisioning%20a%20VM%20on%20Azure%20to%20training%20with%20NVIDIA%20TLT%20on%20Azure%20cloud.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId--412353838%22%20id%3D%22toc-hId--412356885%22%3EPre-trained%20models%20and%20TLT%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ETransfer%20Learning%20is%20a%20training%20technique%20where%20you%20leverage%20the%20learned%20features%20from%20one%20model%20to%20another.%20Start%20with%20a%20pretrained%20model%20that%20has%20been%20trained%20on%20representative%20datasets%20and%20fine-tuned%26nbsp%3B%20with%20weights%20and%20biases.%20These%20models%20can%20be%20easily%20retrained%20with%20custom%20data%20in%20a%20fraction%20of%20the%20time%20it%20takes%20to%20train%20from%20scratch.%3C%2FP%3E%0A%3CP%3E%3CBR%20%2F%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-center%22%20image-alt%3D%22TLT_on_azure.png%22%20style%3D%22width%3A%20999px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294406i222AAA60F1D8E85B%2Fimage-size%2Flarge%3Fv%3Dv2%26
amp%3Bpx%3D999%22%20role%3D%22button%22%20title%3D%22TLT_on_azure.png%22%20alt%3D%22TLT_on_azure.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%201%20-%20End-to-end%20AI%20workflow%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20NGC%20catalog%2C%20NVIDIA%E2%80%99s%20hub%20of%20GPU-optimized%20AI%20and%20HPC%20software%20contains%20a%20diverse%20collection%20of%20pre-trained%20models%20for%20%3CA%20href%3D%22https%3A%2F%2Fngc.nvidia.com%2Fcatalog%2Fcollections%2Fnvidia%3Atltcomputervision%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3Ecomputer%20vision%3C%2FA%3E%20and%20%3CA%20href%3D%22https%3A%2F%2Fngc.nvidia.com%2Fcatalog%2Fcollections%2Fnvidia%3Atltconversationalai%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3Econversational%20AI%3C%2FA%3E%20use%20cases%20that%20span%20industries%20from%20manufacturing%2C%20to%20retail%20to%20healthcare%20and%20more.%20These%20models%20have%20been%20trained%20on%20images%20and%20large%20sets%20of%20text%20and%20speech%20data%20to%20provide%20you%20with%20a%20highly%20accurate%20model%20to%20start%20with.%20For%20example%2C%20People%20detection%20and%20segmentation%20and%20body%20pose%20estimation%20models%20can%20be%20used%20to%20extract%20occupancy%20insights%20in%20smart%20spaces%20such%20as%20retail%2C%20hospitals%2C%20factories%2C%20offices%2C%20etc.%20Vehicle%20and%20License%20plate%20detection%20and%20recognition%20models%20can%20be%20used%20for%20smart%20infrastructure.%20Automatic%20speech%20recognition%20(ASR)%20and%20Natural%20language%20processing%20(NLP)%20models%20can%20be%20used%20for%20smart%20speakers%2C%20video%20conferencing%2C%20automated%20chatbots%20and%20others.%20In%20addition%20to%20these%20highly%20specific%20use%20case%20models%2C%20you%20also%20have%20the%20flexibility%20to%20use%20the%20general%20purpose%20pre-trained%20models%20from%20popular%20open%20model%20architectures%20such%20as%20ResNet%2C%20EfficientNet%2C%20YOLO%2C%20UN
ET%2C%20and%20others.%20These%20can%20be%20used%20for%20general%20use%20cases%20in%20object%20detection%2C%20classification%20and%20segmentation.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EOnce%20you%20select%20your%20pre-trained%20model%2C%20you%20can%20fine-tune%20the%20model%20on%20your%20dataset%20using%20TLT.%20TLT%20is%20a%20low-code%20Jupyter%20notebook%20based%20workflow%2C%20allowing%20you%20to%20adapt%20an%20AI%20model%20in%20hours%2C%20rather%20than%20months.%20The%20guided%20Jupyter%20notebook%20and%20configurable%20spec%20files%20make%20it%20easy%20to%20get%20started.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EHere%20are%20few%20key%20features%20of%20TLT%20to%20optimize%20inference%20performance%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3EModel%20pruning%20removes%20nodes%20from%20neural%20networks%20while%20maintaining%20comparable%20accuracy%2C%20making%20the%20model%20compact%20and%20optimal%20for%20edge%20deployment%20without%20sacrificing%20accuracy.%3C%2FLI%3E%0A%3CLI%3EINT8%20quantization%20enables%20the%20model%20to%20run%20inference%20at%20lower%20INT8%20precision%2C%20which%20is%20significantly%20faster%20than%20running%20in%20floating%20point%20FP16%20or%20FP32%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3EPruning%20and%20quantization%20can%20be%20achieved%20with%20a%20single%20command%20in%20the%20TLT%20workflow.%20%3CBR%20%2F%3E%3CBR%20%2F%3E%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId-2075158995%22%20id%3D%22toc-hId-2075155948%22%3ESetup%20an%20Azure%20VM%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EWe%20start%20by%20first%20setting%20up%20an%20appropriate%20VM%20on%20Azure%20cloud.%20You%20can%20choose%20from%20the%20following%20VMs%20which%20are%20powered%20by%20NVIDIA%20GPUs%20-%20ND%20100%2C%20NCv3%20and%20NC%20T4_v3%20series.%20For%20this%20blog%2C%20we%20will%20use%20the%20NCv3%20series%20which%20comes%20with%20V100%20GPUs.%20For%20the%20base%20image%20on%20the%20VM%2C%20we%20will%20use%20the%20NVIDIA%20provided%20%3CA%20href%3D%22https%3A%2F%2Fa
zuremarketplace.microsoft.com%2Fen-us%2Fmarketplace%2Fapps%2Fnvidia.ngc_azure_17_11%3Ftab%3DOverview%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noreferrer%22%3EGPU-optimized%20image%20from%20Azure%20marketplace%3C%2FA%3E.%20NVIDIA%20base%20image%20includes%20all%20the%20lower%20level%20dependencies%20which%20reduces%20the%20friction%20of%20installing%20drivers%20and%20other%20prerequisites.%20Here%20are%20the%20steps%20to%20setup%20Azure%20VM%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EStep%201%3C%2FSTRONG%3E%20-%20Pull%20the%20%3CA%20href%3D%22https%3A%2F%2Fazuremarketplace.microsoft.com%2Fen-us%2Fmarketplace%2Fapps%2Fnvidia.ngc_azure_17_11%3Ftab%3DOverview%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noreferrer%22%3EGPU%20optimized%20image%3C%2FA%3E%20from%20Azure%20marketplace%20by%20clicking%20on%20the%20%E2%80%9CGet%20it%20Now%E2%80%9D%20button.%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_1-1625755027598.png%22%20style%3D%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294395iB424CB973B146AF6%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_1-1625755027598.png%22%20alt%3D%22cshah31_1-1625755027598.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%202%20-%20GPU%20optimized%20image%20on%20Azure%20Marketplace%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ESelect%20the%20v21.04.1%20version%20under%20the%20Software%20plan%20to%20select%20the%20latest%20version.%20This%20will%20have%20the%20latest%20NVIDIA%20drivers%20and%20CUDA%20toolkit.%20Once%20you%20select%20the%20version%2C%20it%20will%20direct%20you%20to%20the%20Azure%20portal%20where%20you%20will%20create%20your%20VM.%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_2-1625755027605.png%22%20style%3D
%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294394iD8EAEC2776A5E985%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_2-1625755027605.png%22%20alt%3D%22cshah31_2-1625755027605.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%203%20-%20Image%20version%20selection%20window%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EStep%202%3C%2FSTRONG%3E%20-%20Configure%20your%20VM%3C%2FP%3E%0A%3CP%3EIn%20the%20Azure%20portal%2C%20click%20%E2%80%9CCreate%E2%80%9D%20to%20start%20configuring%20the%20VM.%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_3-1625755027614.png%22%20style%3D%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294397i420BD03A979B4D9B%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_3-1625755027614.png%22%20alt%3D%22cshah31_3-1625755027614.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%204%20-%20Azure%20Portal%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThis%20will%20pull%20the%20following%20page%20where%20you%20can%20select%20your%20subscription%20method%2C%20resource%20group%2C%20region%20and%20Hardware%20configuration.%20Provide%20a%20name%20for%20your%20VM.%20Once%20you%20are%20done%20you%20can%20click%20on%20the%20%E2%80%9CReview%20%2B%20Create%E2%80%9D%20button%20at%20the%20end%20to%20do%20a%20final%20review.%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3ENote%3C%2FSTRONG%3E%3A%20The%20default%20disk%20space%20is%2032GB.%20It%20is%20recommended%20to%20use%20%26gt%3B128GB%20disk%20for%20this%20experiment%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_4-1625755027625.png%22%20style%3D%22width%3A%20400px%3B%22%3E%
3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294398iC22A0CB968B433D3%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_4-1625755027625.png%22%20alt%3D%22cshah31_4-1625755027625.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%205%20-%20Create%20VM%20window%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EMake%20the%20final%20review%20of%20the%20offering%20that%20you%20are%20creating.%20Once%20done%2C%20hit%20the%20%E2%80%9CCreate%E2%80%9D%20button%20to%20spin%20up%20your%20VM%20in%20Azure.%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3ENote%3A%20%3C%2FSTRONG%3EOnce%20you%20create%2C%20you%20will%20start%20incurring%20cost%2C%20so%20please%20review%20the%20pricing%20details.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_5-1625755027637.png%22%20style%3D%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294399iBCC9D65560912FD7%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_5-1625755027637.png%22%20alt%3D%22cshah31_5-1625755027637.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%206%20-%20VM%20review%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EStep%203%3C%2FSTRONG%3E%20-%20SSH%20in%20to%20your%20VM%3C%2FP%3E%0A%3CP%3EOnce%20your%20VM%20is%20created%2C%20SSH%20into%20your%20VM%20using%20the%20username%20and%20domain%20name%20or%20IP%20address%20of%20your%20VM.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-applescript%22%3E%3CCODE%3Essh%20%3CUSERNAME%3E%40%3CIP%20address%3D%22%22%3E%3C%2FIP%3E%3C%2FUSERNAME%3E%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId-267704532%22%2
0id%3D%22toc-hId-267701485%22%3ETraining%202D%20Body%20Pose%20with%20TLT%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EIn%20this%20step%2C%20we%20will%20walk%20through%20the%20steps%20of%20training%20a%20high%20performance%202D%20body%20pose%20model%20with%20TLT.%20This%20is%20a%20fully%20convolutional%20model%20and%20consists%20of%20a%20backbone%20network%2C%20an%20initial%20prediction%20stage%20which%20does%20a%20pixel-wise%20prediction%20of%20confidence%20maps%20(heatmap)%20and%20part-affinity%20fields%20(PAF)%20followed%20by%20multistage%20refinement%20(0%20to%20%3CEM%3EN%3C%2FEM%3E%20stages)%20on%20the%20initial%20predictions.%20This%20model%20is%20further%20optimized%20by%20pruning%20and%20quantization.%20This%20allows%20us%20to%20run%20this%20in%20real-time%20on%20edge%20platforms%20like%20NVIDIA%20Jetson.%3C%2FP%3E%0A%3CP%3EIn%20this%20blog%2C%20we%20will%20focus%20on%20how%20to%20run%20this%20model%20with%20TLT%20on%20Azure%20but%20if%20you%20would%20like%20to%20learn%20more%20about%20the%20model%20architecture%20and%20how%20to%20optimize%20the%20model%2C%20check%20out%20the%20two%20part%20blog%20on%20Training%2FOptimization%202D%20body%20pose%20with%20TLT%20-%20%3CA%20href%3D%22https%3A%2F%2Fdeveloper.nvidia.com%2Fblog%2Ftraining-optimizing-2d-pose-estimation-model-with-tlt-part-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3EPart%201%3C%2FA%3E%20and%20%3CA%20href%3D%22https%3A%2F%2Fdeveloper.nvidia.com%2Fblog%2Ftraining-optimizing-2d-pose-estimation-model-with-tlt-part-2%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3EPart%202%3C%2FA%3E.%20Additional%20information%20about%20this%20model%20can%20be%20found%20in%20the%20%3CA%20href%3D%22https%3A%2F%2Fngc.nvidia.com%2Fcatalog%2Fmodels%2Fnvidia%3Atlt_bodyposenet%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noreferrer%22%3ENGC%20Model%20card%3C%2FA%3E.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EStep%201%20%3C%2FSTRON
G%3E-%20Setup%20TLT%3C%2FP%3E%0A%3CP%3EFor%20TLT%2C%20we%20require%20a%20Python%20Virtual%20environment.%20Setup%20the%20Python%20Virtual%20Environment.%20Run%20the%20commands%20below%20to%20set%20up%20the%20Virtual%20environment.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-applescript%22%3E%3CCODE%3Esudo%20su%20-%20root%0Ausermod%20-a%20-G%20docker%20azureuser%0Aapt-get%20-y%20install%20python3-pip%20unzip%0Apip3%20install%20virtualenvwrapper%0Aexport%20VIRTUALENVWRAPPER_PYTHON%3D%2Fusr%2Fbin%2Fpython3%0Asource%20%2Fusr%2Flocal%2Fbin%2Fvirtualenvwrapper.sh%0Amkvirtualenv%20launcher%20-p%20%2Fusr%2Fbin%2Fpython3%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EInstall%20Jupyterlab%20and%20TLT%20Python%20package.%20TLT%20uses%20a%20Python%20launcher%20to%20launch%20training%20runs.%20The%20launcher%20will%20automatically%20pull%20the%20correct%20docker%20image%20from%20NGC%20and%20run%20training%20on%20it.%20Alternatively%2C%20you%20can%20also%20manually%20pull%20the%20docker%20container%20and%20run%20it%20directly%20inside%20the%20docker.%20For%20this%20blog%2C%20we%20will%20run%20it%20from%20the%20launcher.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-bash%22%3E%3CCODE%3Epip3%20install%20jupyterlab%0Apip3%20install%20nvidia-pyindex%0Apip3%20install%20nvidia-tlt%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ECheck%20if%20TLT%20is%20installed%20properly.%20Run%20the%20command%20below.%20This%20will%20dump%20a%20list%20of%20AI%20tasks%20that%20are%20supported%20by%20TLT.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-bash%22%3E%3CCODE%3Etlt%20info%20--verbose%0A%0AConfiguration%20of%20the%20TLT%20Insta
nce%0A%0Adockers%3A%0A%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%20nvcr.io%2Fnvidia%2Ftlt-streamanalytics%3A%0A%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%20docker_tag%3A%20v3.0-py3%0A%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%20tasks%3A%0A1.%20augment%0A2.%20classification%0A3.%20detectnet_v2%0A4.%20dssd%0A5.%20emotionnet%0A6.%20faster_rcnn%0A7.%20fpenet%0A8.%20gazenet%0A9.%20gesturenet%0A10.%20heartratenet%0A11.%20lprnet%0A12.%20mask_rcnn%0A13.%20retinanet%0A14.%20ssd%0A15.%20unet%0A16.%20yolo_v3%0A17.%20yolo_v4%0A18.%20tlt-converter%0A%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%20%0A%20%20%20%20%20%20%20%20%20%20nvcr.io%2Fnvidia%2Ftlt-pytorch%3A%0A%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3B%26nbsp%3Bdocker_tag%3A%20v3.0-py3%0A%0Atasks%3A%0A%0A1.%20speech_to_text%0A2.%20text_classification%0A3.%20question_answering%0A4.%20token_classification%0A5.%20intent_slot_classification%0A6.%20punctuation_and_capitalization%0Aformat_version%3A%201.0%0A%0Atlt_version%3A%203.0%0A%0Apublished_date%3A%20mm%2Fdd%2Fyyyy%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ELogin%20to%20NGC%20and%20download%20Jupyter%20notebooks%20from%20NGC%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-bash%22%3E%3CCODE%3Edocker%20login%20nvcr.io%0A%0Acd%20%2Fmnt%2F%0Asudo%20chown%20azureuser%3Aazureuser%20%2Fmnt%2F%0A%0Awget%20--content-disposition%20https%3A%2F%2Fapi.ngc.nvidia.com%2Fv2%2Fresources%2Fnvidia%2Ftlt_cv_samples%2Fversions%2Fv1.1.0%2Fzip%20-O%20tlt_cv_samples_v1.1.0.zip%0A%0Aunzip%20-u%20tlt_cv_samples_v1.1.0.zip%26nbsp%3B%20-d%20.%
2Ftlt_cv_samples_v1.1.0%20%26amp%3B%26amp%3B%20cd%20.%2Ftlt_cv_samples_v1.1.0%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EStart%20your%20Jupyter%20notebook%20and%20open%20it%20in%20your%20browser.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%20class%3D%22lia-code-sample%20language-bash%22%3E%3CCODE%3Ejupyter%20notebook%20--ip%200.0.0.0%20--port%208888%20--allow-root%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EStep%202%20%3C%2FSTRONG%3E-%20Open%20Jupyter%20notebook%20and%20spec%20file%3C%2FP%3E%0A%3CP%3EIn%20the%20browser%2C%20you%20will%20see%20all%20the%20CV%20models%20that%20are%20supported%20by%20TLT.%20For%20this%20experiment%20we%20will%20train%20a%202D%20body%20pose%20model.%20Click%20on%20the%20%E2%80%9Cbpnet%E2%80%9D%20model%20in%20the%20Jupyter%20notebook.%20In%20this%20directory%2C%20you%20will%20also%20find%20Jupyter%20notebooks%20for%20popular%20networks%20like%20YOLOV3%2FV4%2C%20FasterRCNN%2C%20SSD%2C%20UNET%20and%20more.%20You%20can%20follow%20the%20same%20steps%20to%20train%20any%20other%20models.%3C%2FP%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20image-alt%3D%22cshah31_6-1625755027642.png%22%20style%3D%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F294400iE21D8D54E12C8D52%2Fimage-size%2Fmedium%3Fv%3Dv2%26amp%3Bpx%3D400%22%20role%3D%22button%22%20title%3D%22cshah31_6-1625755027642.png%22%20alt%3D%22cshah31_6-1625755027642.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EFigure%207%20-%20Model%20selection%20from%20Jupyter%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EOnce%20you%20are%20inside%2C%20you%20will%20find%20a%20few%20config%20files%20and%20specs%20directory.%20Spec%20directory%20has%20all%20the%20%E2%80%98spec
To learn more about all the parameters, refer to the 2D body pose documentation: https://docs.nvidia.com/tlt/tlt-user-guide/text/bodypose_estimation/bodyposenet.html#bodyposenet

Figure 8 - Body pose estimation training directory

Step 3 - Step through the guided notebook

Open 'bpnet.ipynb' and step through the notebook. It lays out the learning objectives and all the steps to download the dataset and the pre-trained model, run training, and optimize the model. For this exercise we use the open-source COCO dataset, but you are welcome to use your own body pose dataset; Section 3.2 in the notebook covers training with a custom dataset.
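A custom dataset for body pose training needs to follow the COCO keypoint convention, where each annotation stores keypoints as a flat [x1, y1, v1, x2, y2, v2, ...] list with a visibility flag per point. This is not part of the notebook itself, but as a rough sketch of the format, the snippet below builds a minimal COCO-style annotation and reads the labeled (x, y, visibility) triples back out:

```python
# Minimal sketch of a COCO-style keypoint annotation (illustrative only).
# Visibility flags: 0 = not labeled, 1 = labeled but occluded, 2 = visible.
annotation = {
    "image_id": 1,
    "category_id": 1,          # "person" in the COCO keypoints task
    "num_keypoints": 2,        # count of labeled keypoints (v > 0)
    "keypoints": [
        320, 240, 2,           # e.g. nose: visible at (320, 240)
        300, 260, 1,           # e.g. left eye: labeled but occluded
        0, 0, 0,               # unlabeled keypoint
    ],
}

def labeled_keypoints(ann):
    """Return (x, y, visibility) triples for labeled keypoints only."""
    kp = ann["keypoints"]
    triples = [tuple(kp[i:i + 3]) for i in range(0, len(kp), 3)]
    return [t for t in triples if t[2] > 0]

print(labeled_keypoints(annotation))  # → [(320, 240, 2), (300, 260, 1)]
```

If your annotations come from a different labeling tool, a small conversion script along these lines is usually all that is needed to bring them into the COCO layout the notebook expects.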
Figure 9 - Jupyter notebook for training

In this blog, we demonstrated a body pose estimation use case with TLT, but you can follow the same steps to train any Computer Vision or Conversational AI model with TLT. NVIDIA pre-trained models, the Transfer Learning Toolkit, and GPUs in the Azure cloud simplify the journey and lower the barrier to getting started with AI. The availability of GPUs in Microsoft Azure Cloud lets you start training right away, without investing in your own hardware infrastructure, and scale compute resources up or down on demand.

By leveraging the pre-trained models and TLT, you can quickly and easily adapt models to your use cases and develop high-performance models that can be deployed at the edge for real-time inference.

Get started today with NVIDIA TAO TLT on Azure Cloud: https://developer.nvidia.com/transfer-learning-toolkit

Resources:
- TLT product page: https://developer.nvidia.com/transfer-learning-toolkit
- CV model collection: https://ngc.nvidia.com/catalog/collections/nvidia:tltcomputervision
- Conversational AI model collection: https://ngc.nvidia.com/catalog/collections/nvidia:tltconversationalai
- TLT documentation: https://docs.nvidia.com/tlt/tlt-user-guide/
- Azure VM documentation: https://docs.microsoft.com/en-us/azure/virtual-machines/
- Azure Virtual Machines: https://azure.microsoft.com/en-us/services/virtual-machines/
- 2D body pose model card: https://ngc.nvidia.com/catalog/models/nvidia:tlt_bodyposenet
- 2D body pose training/optimization blog, part 1: https://developer.nvidia.com/blog/training-optimizing-2d-pose-estimation-model-with-tlt-part-1/ | part 2: https://developer.nvidia.com/blog/training-optimizing-2d-pose-estimation-model-with-tlt-part-2
Last update: Jul 08 2021 08:59 AM