Accelerating AI for COVID-19 on Microsoft Azure Machine Learning using Clara Imaging from NVIDIA NGC
Published Sep 01 2020

This post was co-authored by Krishna Anumalasetty, Tom Drabas, and Nalini Chandhi from Microsoft, and Abhilash Somasamudramath, Manuel Reyes Gomez, Brad Genereaux, and Akhil Docca from NVIDIA

 

The following three videos will walk you through the major steps outlined in this blog and will help you in implementing the solution.

  1. Setting up Azure Machine Learning Workspace
  2. Installing the AzureML-NGC Toolkit
  3. Fine-tuning the pre-trained model

 


 

Figure 1. CT-Scan Images Classified for COVID-19 Probability

 

COVID-19 has fundamentally transformed the world we live in. As the scientific community across the globe unites in the face of this pandemic, it is crucial to enable researchers to collaborate and leverage tools that will speed up detection and drug discovery for COVID-19. The power of AI in radiological medical imaging is helping with faster detection, segmentation, and notifications.

 

Leveraging AI for healthcare applications is challenging for many reasons. The complex science required to build and train deep neural networks, together with the setup and maintenance of the infrastructure needed to develop, deploy, and manage these applications at scale, poses a barrier to addressing our most pressing healthcare challenges with AI.

 

Cloud computing has given researchers easy access to scalable, on-demand infrastructure so they can get their AI applications up and running quickly. In the medical imaging space, the real need is for a platform that combines the power of NVIDIA GPUs with a secure environment and easy access to AI software. NVIDIA Clara™ is a full-stack GPU-accelerated healthcare framework that accelerates the use of AI for medical research, and it is available in the NVIDIA NGC Catalog.

 

Azure Machine Learning

Azure Machine Learning (Azure ML) empowers developers, data scientists, machine learning engineers, and AI engineers to build, train, deploy, and manage machine learning models. It is an open platform with built-in support for open-source tools and frameworks, such as PyTorch, scikit-learn, and TensorFlow, along with numerous Integrated Development Environments (IDEs), and it supports key languages like Python and R.

 

Azure ML abstracts the setup, installation, and configuration of the machine learning environment, saving you the hassle of infrastructure management by taking care of the underlying technicalities. This enables domain experts, such as healthcare researchers and developers, to build mission-critical AI solutions faster and more easily. Whether the project requires image classification, object detection, speech analysis, or natural language processing, Azure ML streamlines AI-powered solution development.

 

Productivity is also boosted by accelerating your training jobs with Azure’s global infrastructure. Enhance your workflow by scaling out to multi-node compute clusters or scaling up to powerful GPU-enabled machines. Combining this with end-to-end AI lifecycle management through industry-leading MLOps means data science teams can collaborate better and get to production quicker. And, with more than 60 compliance certifications including FedRAMP High, HIPAA and DISA IL5, plus configurable security features, Azure ML allows you to create a trusted working environment.

 

NVIDIA NGC Catalog and Clara

A key component of the NVIDIA AI ecosystem is the NGC Catalog, a software hub of GPU-optimized AI, HPC, and data analytics software built to simplify and accelerate end-to-end workflows. With over 150 enterprise-grade containers, 100+ models, and industry-specific SDKs that can be deployed on premises, in the cloud, or at the edge, the NGC Catalog enables data scientists and developers to build best-in-class solutions, gather insights, and deliver business value faster. Every asset in the NGC Catalog is validated for performance, quality, and security by NVIDIA, giving you the confidence needed to deploy it within the Azure ML environment.

 

The deep learning containers in the NGC Catalog are updated and fine-tuned monthly to deliver maximum performance through software-driven optimizations that augment GPU hardware acceleration. These improvements are made to libraries and runtimes to extract maximum performance from NVIDIA GPUs. Pre-trained models from the NGC Catalog help speed up the application-building process. You can find more than 100 pre-trained models across a wide array of applications such as image analysis, natural language processing, speech processing, and recommendation systems. The models are curated and tuned to perform optimally on NVIDIA GPUs. By applying transfer learning, you can create custom models by retraining them on your own data.

 

NVIDIA Clara is one of many artifacts available in the NGC Catalog. Clara is a healthcare framework comprising full-stack GPU-accelerated libraries, SDKs, and reference applications for creating secure, scalable healthcare applications.

 

The Clara family of application frameworks includes:

  • Clara Imaging: Application frameworks for accelerating the development and deployment of AI-based medical imaging workflows in radiology and pathology, as well as in some medical instruments
  • Clara Parabricks: Computational framework supporting genomic analysis in DNA and RNA
  • Clara Guardian: Application framework and partner ecosystem that simplifies the development and deployment of smart sensors within the hospital with multimodal AI

Clara Imaging, the focus of this blog, offers easy-to-use, domain-optimized tools to create high-quality, labeled datasets, collaborative techniques to train robust AI models, and end-to-end software for scalable and modular AI deployments. It consists of two essential elements – Clara Train and Clara Deploy:

  • Clara Train is a framework that includes two main libraries: AI-Assisted Annotation (AIAA), which enables medical viewers to rapidly create annotated datasets suitable for training, and the Training Framework, a TensorFlow-based framework to kick-start AI development with techniques like transfer learning, federated learning, and AutoML.
  • Clara Deploy provides a container-based development and deployment framework for multi-AI, multi-domain workflows in smart hospitals for imaging, genomics, and signal processing workloads. It leverages Kubernetes to enable developers and data scientists to define a multi-staged container-based pipeline.   


 

Figure 2. The entire Clara pipeline, including Clara Train and Clara Deploy

 

The entire Clara Framework can be easily accessed from the Clara portfolio page. In addition to Clara Train and Deploy, reference models and pipelines are also available for download. Applying transfer learning, developers can create new models with their own custom dataset.

 


 

Figure 3. Various types of pre-trained models that cover both 2D and 3D segmentation and classification for different use cases

 

NGC-AzureML Quick Launch Toolkit

A ready-to-use Jupyter notebook was created to showcase the fine-tuning of a pre-trained COVID-19 CT Scan Classification model from the NGC Catalog.

 

To help automate the deployment, we have developed the NGC-AzureML Quick Launch Toolkit, which leverages the Azure ML SDK to create the compute and software resources needed to run machine learning applications. The Azure ML SDK uses Azure Machine Learning Compute Clusters, which require their own quota; this quota can be requested through the same process used to set up a quota for Azure VMs.
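
Under the hood, the compute provisioning that the toolkit performs is similar to creating a compute target yourself with the (v1) Azure ML SDK. The following is a minimal sketch, not the toolkit's actual implementation, assuming azureml-core is installed and a workspace config.json is present; the cluster name and VM size match the config file shown later:

from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

# Connect to an existing Azure ML workspace (reads the local config.json)
ws = Workspace.from_config()

# Describe a GPU cluster similar to the one the toolkit creates
config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC12s_v3",       # NVIDIA GPU VM size
    min_nodes=0,                       # scale down to zero when idle
    max_nodes=1,
    idle_seconds_before_scaledown=300,
)

# Create (or reuse) the cluster and wait for provisioning to finish
cluster = ComputeTarget.create(ws, "clara-ct", config)
cluster.wait_for_completion(show_output=True)

With min_nodes set to 0, the cluster scales down to zero nodes when idle, so you only pay for compute while jobs are running.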

 

An Azure Machine Learning workspace is a prerequisite; a video on setting it up can be found at Setting up Azure Machine Learning Workspace. The video that walks through how to install the AzureML-NGC toolkit described in this section can be found at Installing the AzureML-NGC Toolkit.

 

The toolkit can be used to launch any asset from the NGC Catalog by just changing a config file. In our example, the toolkit pulls the relevant assets for Clara, but you can customize it to pull assets for other use cases such as computer vision, natural language understanding, and more.

 

The toolkit automates the steps outlined below:

  • Configuring an Azure instance with NVIDIA GPUs and the right NVIDIA libraries and drivers
  • Loading the desired NGC Catalog container image (the Clara Train SDK in this example) onto the Azure instance
  • Uploading additional material from the NGC Catalog to the Azure instance: model(s) (the pre-trained COVID-19 CT Scan Classifier in this case), auxiliary code, and the corresponding datasets
  • Loading the ready-to-use Jupyter notebook from the NGC Catalog that contains the application
  • Installing JupyterLab on the Azure instance and making it accessible locally to run the ready-to-use Jupyter notebook

To set up the AzureML environment, you only need to run two commands in the command line interface (CLI):

 

Install azureml-ngc-tools

First, install the NGC-AzureML Quick Launch Toolkit on the local machine via pip:

 

pip install azureml-ngc-tools

 

Configure azure_config.json

This file contains the Azure credentials and the desired instance type. A ready-to-use template for this example can be downloaded here; edit it with your own credentials.

An example of how the azure_config.json file might look is the following:

 

{
    "azureml_user":
    {
        "subscription_id": "ab221ca4-f098-XXXXXXXXX-5073b3851e68",
        "resource_group": "TutorialTestA",
        "workspace_name": "TutorialTestA1",
        "telemetry_opt_out": true
    },
    "aml_compute"
    {
        "ct_name":"clara-ct",
        "exp_name":"clara-exp",
        "vm_name":"Standard_NC12s_v3",
        "admin_name": "clara",
        "min_nodes":0,
        "max_nodes":1,
        "vm_priority": "dedicated",
        "idle_seconds_before_scaledown":300,
        "python_interpreter":"/usr/bin/python",
        "conda_packages":["matplotlib","jupyterlab"],
        "environment_name":"clara_env",
        "docker_enabled":true,
        "user_managed_dependencies":true,
        "jupyter_port":9000
    }
}

 

When the above file is used with the azureml-ngc-tools command, an Azure Machine Learning Compute Cluster named “clara-ct” is created using a node from the “Standard_NC12s_v3” VM size. Follow this link to learn more about the other specifications.

 

Configure ngc_config.json

This file lists references to all the NGC Catalog assets to be pre-installed in the Azure ML environment for the specific use case. In this example, the additional resources are the ready-to-use Jupyter notebook and the associated data files. A ready-to-use config file that lists the various assets needed for the application below can be downloaded here. No additional modifications are required for this file.

 

The ngc_config.json file looks like:

 

{
    "base_dockerfile":"nvcr.io/nvidia/clara-train-sdk:v3.0",
    "additional_content": {
        "download_content": true,
        "unzip_content": true,
        "upload_content": true,
        "list":[
            {
                "url":"https://api.ngc.nvidia.com/v2/models/nvidia/med/clara_train_covid19_3d_ct_classification/versions/1/zip",
                "filename":"clara_train_covid19_3d_ct_classification_1.zip",
                "localdirectory":"clara/experiments/covid19_3d_ct_classification-v2",
                "computedirectory":"clara/experiments/covid19_3d_ct_classification-v2",
                "zipped":true
            },
            {
                "url":"https://api.ngc.nvidia.com/v2/models/nvidia/med/clara_train_covid19_3d_ct_classification/versions/1/zip",
                "filename":"clarasdk.zip",
                "localdirectory":"clara",
                "computedirectory":"clara",
                "zipped":true
            },
            {
                "url":"https://api.ngc.nvidia.com/v2/resources/nvidia/azuremlclarablogquicklaunch/versions/example/zip",
                "filename":"claractscanexample.zip",
                "localdirectory":"clara/claractscanexample",
                "computedirectory":"clara",
                "zipped":true
            }
        ]
    }
}

 

When used with the azureml-ngc-tools command, the "nvcr.io/nvidia/clara-train-sdk:v3.0" container is pulled from the NGC Catalog and loaded onto the Azure ML Compute Cluster. Additionally, three resources are downloaded and unzipped into the local environment (with the names given by "filename", at the relative directories given by "localdirectory") and then loaded into the Compute Cluster at the location given by "computedirectory".
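
For reference, here is a minimal sketch of what happens for each entry in the "list" above, assuming the v1 azureml-core package and the requests library; this is an illustration, not the toolkit's actual code:

import io
import zipfile

import requests
from azureml.core import Workspace

# URL, local directory, and target path taken from the third "list" entry above
url = ("https://api.ngc.nvidia.com/v2/resources/nvidia/"
       "azuremlclarablogquicklaunch/versions/example/zip")

resp = requests.get(url)
resp.raise_for_status()

# Unzip the downloaded NGC asset into the configured local directory
zipfile.ZipFile(io.BytesIO(resp.content)).extractall("clara/claractscanexample")

# Upload the unzipped content to the workspace's default datastore, which
# backs the "workspaceblobstore" folder visible later in JupyterLab
ws = Workspace.from_config()
ws.get_default_datastore().upload(src_dir="clara/claractscanexample",
                                  target_path="clara", show_progress=True)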

Run azureml-ngc-tools

Once the two configuration files are ready, run the azureml-ngc-tools command on the local machine to provision the instance:

 

azureml-ngc-tools --login azure_config.json --app ngc_config.json

 

Refer to the following screenshot:


 

Figure 4. Azure ML instance being set up by the NGC-AzureML Quick Launch Toolkit

 

The command creates a Compute Cluster "clara-ct" using the VM size "Standard_NC12s_v3", then creates the Azure ML environment "clara_env" with the base image "nvcr.io/nvidia/clara-train-sdk:v3.0". The container is downloaded and cached after the first use, making it faster to reuse the next time.

 

Next, the additional NGC Catalog content is downloaded and unzipped locally, then uploaded to the Compute Cluster. You should now see the newly created Compute Cluster, "clara-ct", in the Azure Portal under the specified workspace ("TutorialTestA1" in this example) under Compute -> Compute clusters. This should look like the following:

 


 

Figure 5. Console view of AzureML Environment Setup by NGC-AzureML Quick Launch Toolkit

 

Additional Clara introductory content, such as examples of other pre-trained models, datasets, and Jupyter notebooks, is also downloaded and unzipped locally at the relative directories specified in the ngc_config.json file. That content is also uploaded to the Compute Cluster, visible as follows:

 


 

Figure 6. Assets from the NGC Catalog pre-loaded onto Azure ML by NGC-AzureML Quick Launch toolkit

 

Finally, the command launches JupyterLab on the Compute Cluster and forwards its port to the local machine, so JupyterLab can be accessed locally. A URL is generated that links to the JupyterLab session running on the Azure ML instance, pre-loaded with the assets specified in the ngc_config.json file.

 


 

Figure 7. Direct URL to the fully set-up and configured Azure ML JupyterLab for this particular use case

 

Once you have completed your work in JupyterLab, the Compute Cluster can be stopped by simply pressing CTRL+C in the terminal.

 

The JupyterLab can be launched by copying and pasting the URL into a browser window, as follows:

 


 

Figure 8. URL opened on any browser window launches ready-to-use Azure ML JupyterLab

 

Note that it starts at the workspace root folder for the provided workspace.

 

To access the content uploaded by the NGC-AzureML Quick Launch Toolkit, navigate to the "workspaceblobstore/clara/" folder. All your relevant content will now be available in the session:

 


 

Figure 9. Overview of NGC assets pre-loaded onto your AzureML JupyterLab for this use-case

 

Fine-Tuning a Model with Clara Imaging and Azure ML

A video that walks through the model fine-tuning described in this section can be found at Fine-tuning the pre-trained model. To demonstrate how to build and deploy AI for medical imaging using Clara Imaging and Azure ML, we will fine-tune a pre-trained COVID-19 CT Scan Classification model and optimize it for inference with a custom dataset. This pre-trained model was developed by NVIDIA Clara researchers in collaboration with the NIH, which maintains a repository of CT radiological images from around the world. The NVIDIA pre-trained model reports an accuracy of over 90% on classifying COVID vs. non-COVID findings from chest CT scans. More information about the training methodology and the results achieved by this pre-trained model is available in a white paper here.

 

The model uses two inputs: a CT scan image and a lung segmentation image. A computerized tomography (CT) scan is a 3D medical imaging procedure that uses computer-processed combinations of many X-ray measurements taken from different angles to produce cross-sectional (tomographic) slices. Before the data can be used for training, it needs to be preprocessed: first converted to Hounsfield units and then rotated to a prescribed orientation. A multitude of other pre-trained models developed by NVIDIA Clara for various healthcare applications are available for free in the NGC Catalog. (N.B. These models are provided for research purposes only.)
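
Clara Train's data transforms handle this preprocessing internally; purely as an illustration of the two steps, a sketch using pydicom and nibabel might look like the following (file names are hypothetical):

import numpy as np
import pydicom
import nibabel as nib

def ct_slice_to_hounsfield(path):
    # Convert a raw CT DICOM slice to Hounsfield units (HU) using the
    # standard rescale tags: HU = raw_pixel * RescaleSlope + RescaleIntercept
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)
    return pixels * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Reorient a CT volume (e.g. a NIfTI file) to a canonical RAS orientation
volume = nib.as_closest_canonical(nib.load("ct_volume.nii.gz"))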

 

The following images show one slice of the stack of images that comprise the patient's CT, along with the corresponding mask:


Figure 10. Example CT-Scan Slice with the corresponding mask (on the right) to focus the model on the lungs

 

The classifier produces a probability between zero and one, indicating whether or not the patient has been infected with COVID-19. However, such predictions require a high degree of accuracy, and in most cases the CT scans at hand will have different characteristics from the original dataset used to build the base model. Thus, the model needs to be fine-tuned to achieve the necessary accuracy. NVIDIA Clara includes a fine-tuning mechanism to efficiently adapt the pre-trained model to your dataset with minimal effort while increasing its accuracy.

 

The hospital data for this blog is simulated using 40 labelled studies from two sources:

 

Running the CovidCT-ScanClassifier.ipynb Jupyter Notebook

 


 

Figure 11. The Jupyter notebook for this blog example walks through these steps

 

Step 1: Set Up

The ready-to-use Jupyter notebook first describes the mechanics of how the pre-trained model is built and how it can be fine-tuned, introducing the concept of an MMAR (Medical Model ARchive). The dataset from Kaggle is downloaded (using your previously obtained Kaggle key), and the input examples are examined and visualized. The new data is then indexed so that Clara can refer to it for different tasks, such as inferring whether a patient has COVID-19, fine-tuning, and deploying the model. Refer to the CovidCT-ScanClassifier.ipynb Jupyter notebook to see how all these tasks are done in more detail.

 

The dataset that we are using to fine-tune the model consists of 40 sample studies. To test the current accuracy of the model, we'll use three data points from the set. We'll exclude these three data points when we eventually fine-tune the model.

 

The Jupyter notebook separates those examples as seen here:

 


Figure 12. Slicing the reference dataset for training and testing

 

Step 2: Classify/Infer on Test Data

Use the infer.sh command to check the accuracy of the model on the test data.

The MMAR from the base model contains the original configuration files used to train it. Those files need to be adapted to the new data before infer.sh is run; the Jupyter notebook makes all the required modifications.

Once the configuration files have been adapted, the infer command is run on the test data.
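
As a rough illustration of the kind of adaptation the notebook performs, the sketch below repoints an MMAR environment file at the new data; the key names follow common Clara Train MMAR conventions and may differ for this model:

import json

# Hypothetical path inside the downloaded MMAR
env_path = "covid19_3d_ct_classification-v2/config/environment.json"

with open(env_path) as f:
    env = json.load(f)

# Point the MMAR at the new data root and its data list
env["DATA_ROOT"] = "/path/to/new/ct_data"
env["DATASET_JSON"] = "/path/to/datalist_finetune.json"

with open(env_path, "w") as f:
    json.dump(env, f, indent=2)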

Inspecting the inference results

The Jupyter notebook then retrieves the probabilities produced by the "new_infer.sh" command and estimates the predicted labels (1: COVID, 0: no COVID). Those labels are then used along with the true labels to compute the testing examples' average precision score using the sklearn.metrics.average_precision_score function.
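
In essence, the metric reduces to a single scikit-learn call; a minimal sketch with illustrative labels and probabilities:

from sklearn.metrics import average_precision_score

y_true = [1, 0, 1]            # ground-truth labels for the held-out studies (illustrative)
y_score = [0.92, 0.40, 0.71]  # probabilities returned by the inference run (illustrative)

print(average_precision_score(y_true, y_score))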

 


 

Figure 13. Testing the pre-trained model for performance without fine-tuning on the new reference dataset

 

Notice that the average precision (0.833) is not as high as expected (0.90, i.e. 90%, or higher), because some instances of the new data have either not been converted to Hounsfield units or haven't been rotated to the prescribed orientation. We can now improve the accuracy of the model by fine-tuning it with the full dataset.

 

Step 3: Fine-Tune

The train_finetune.sh command executes the fine-tuning mechanism, in which the original script and its configuration files are adapted to point to the new training data.

Execute the new_train_finetune.sh Command

 


 

Figure 14. Fine-tuning the Clara COVID-19 CT Scan Classification pre-trained model with training set of reference dataset

 

Once the training is complete, you can view the training accuracy, training loss, mean accuracy, and the time the fine-tuning took. You can then decide either to export the model or to continue fine-tuning it. By default, the resulting model is neither automatically saved nor exported. The trained model can be exported using the export.sh command, which produces the frozen graphs necessary for the model to be used by other applications, such as Clara Deploy. The Jupyter notebook sets up the new model so that it can be used to classify the test data.
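
For context, a frozen graph is a self-contained TensorFlow GraphDef; downstream tools load it roughly as follows (the file name is hypothetical, as the actual name is set by the MMAR's export configuration):

import tensorflow as tf

# Read the exported frozen graph from disk (hypothetical file name)
with tf.io.gfile.GFile("models/model_frozen.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph, ready to run in a tf.compat.v1.Session
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")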

 

Step 4: Re-Test Inference with the Fine-Tuned Model

The Jupyter notebook creates a new infer.sh command that points to the full data set and to the new fine-tuned model and then executes the command.

 

The next step is to compute the average COVID-19 classification precision across all testing examples. The Jupyter notebook retrieves the probabilities produced by the "finetuned_infer.sh" command and estimates the predicted labels: 1 (high likelihood of infection) or 0 (low likelihood of infection).

 

Those labels are then used along with the true labels to compute the testing examples' average precision score using the sklearn.metrics.average_precision_score function, as before.

 


 

Figure 15. Testing the fine-tuned model for performance on the testing slice of data from the new reference dataset

 

Notice that the average precision is now much higher: the fine-tuning mechanism has succeeded in adapting the model to the peculiarities of the new data.

 

Deploying the Model

After the AI model is exported and checked, the Medical Model Archive (MMAR) can be connected into research workflow pipelines. These pipelines contain operators for the various phases of pre-transforms and inference, and their results are consumable by the medical imaging ecosystem (e.g., a DICOM-SR, a secondary-capture image with burnt-in results, or an HL7 or FHIR message). The imaging pipeline can be constructed using the Clara Deploy SDK, which uses building blocks called operators and comes with many ready-made pipelines for getting started.

 

Understanding and mapping the architecture of any given environment is important when constructing this pipeline. For example, simulating connectivity from an image manager such as a PACS, and using transformation frameworks to prepare the data for the model, is a good first step. Inference tasks follow, and then the outputs are delivered in a format accepted by the systems and devices at medical institutions.

 


 

Figure 16. A sample deployment workflow using the Clara Deploy application framework

When thinking about the results, it is important to consider the types of data being produced and the systems that will consume them. For instance, is it a classification result (the presence or absence of a disease, like COVID-19) to be delivered as a DICOM-SR for display within a PACS, or as a FHIR Observation object? If it is a segmentation result (identifying a lesion or nodule), creating a DICOM Segmentation object may be appropriate. These are examples of the types of objects consumable by the medical imaging ecosystem.

 

Summary

We have shown you how to get started with Clara Train on Azure ML for radiological CT images, using a pre-trained AI COVID-19 classification model. This is just one example of what can be built with software from the NGC Catalog and Azure ML. Beyond healthcare-centric applications, you can use the containers, models, and SDKs from the NGC Catalog to build applications for other use cases, such as conversational AI, recommendation systems, and many more.

Get started today with NVIDIA Clara from the NVIDIA NGC Catalog on Azure ML.

 

 

 
