
Deploy PyTorch models with TorchServe in Azure Machine Learning online endpoints

gopalv
Microsoft
Jun 21, 2021

With our recent announcement of custom container support in Azure Machine Learning comes support for a wide variety of machine learning frameworks and servers, including TensorFlow Serving, R, and ML.NET. In this blog post, we'll show you how to deploy a PyTorch model using TorchServe.

The steps below reference our existing TorchServe sample in the azureml-examples repository.

Export your model as a .mar file

To use TorchServe, you first need to export your model in the "Model Archive" (.mar) format. Follow the TorchServe quickstart to learn how to do this for your PyTorch model.

Save your .mar file in a directory called "torchserve."
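
If you are following the densenet161 image-classification example from the TorchServe quickstart, the archiving step looks roughly like the command below. The file and handler names are the quickstart's, not ours; substitute your own model, weights, and handler.

# Package the model into a .mar file with torch-model-archiver
# (pip install torch-model-archiver). Paths below assume the
# TorchServe densenet161 example and will differ for your model.
torch-model-archiver --model-name densenet161 \
    --version 1.0 \
    --model-file serve/examples/image_classifier/densenet_161/model.py \
    --serialized-file densenet161-8d451a50.pth \
    --extra-files serve/examples/image_classifier/index_to_name.json \
    --handler image_classifier \
    --export-path torchserve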

Construct a Dockerfile

In the existing sample, we have a two-line Dockerfile:

FROM pytorch/torchserve:latest-cpu

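# Start TorchServe, pointing it at the model archive and config mounted under MODEL_BASE_PATH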
CMD ["torchserve","--start","--model-store","$MODEL_BASE_PATH/torchserve","--models","densenet161.mar","--ts-config","$MODEL_BASE_PATH/torchserve/config.properties"]

Modify this Dockerfile to pass the name of your exported model from the previous step as the value of the "--models" argument.
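
The CMD line also references a config.properties file in the torchserve directory. A minimal sketch of that file, assuming TorchServe's default ports, might look like:

inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082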

Build an image

Now, build a Docker image from the Dockerfile in the previous step, and store this image in the Azure Container Registry associated with your workspace:

# Look up the workspace configured as the CLI default
WORKSPACE=$(az config get --query "defaults[?name == 'workspace'].value" -o tsv)

# Extract the registry name from the workspace's container registry resource ID
ACR_NAME=$(az ml workspace show -w $WORKSPACE --query container_registry -o tsv | cut -d'/' -f9-)

if [[ $ACR_NAME == "" ]]
then
    echo "Failed to retrieve ACR name, exiting"
    exit 1
fi

# Build and tag the image in ACR; BASE_PATH is the sample directory that
# contains the Dockerfile and the torchserve/ model folder
az acr login -n $ACR_NAME
IMAGE_TAG=${ACR_NAME}.azurecr.io/torchserve:8080
az acr build $BASE_PATH/ -f $BASE_PATH/torchserve.dockerfile -t $IMAGE_TAG -r $ACR_NAME

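Optionally, you can confirm the image landed in the registry by listing the repository's tags:

az acr repository show-tags -n $ACR_NAME --repository torchserve -o table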

Test locally

Ensure that you can serve your model by testing it locally. You will need Docker installed for this to work. Below, we show how to run the image, download some sample data, and send test liveness and scoring requests.

# Run the image locally, mounting the torchserve/ model folder into the
# container; MODEL_BASE_PATH should match the value used in the endpoint YAML
docker run --rm -d -p 8080:8080 --name torchserve-test \
    -e MODEL_BASE_PATH=$MODEL_BASE_PATH \
    -v $PWD/$BASE_PATH/torchserve:$MODEL_BASE_PATH/torchserve $IMAGE_TAG

# Give the container a few seconds to start before probing it
sleep 10

# Check TorchServe health
echo "Checking TorchServe health..."
curl http://localhost:8080/ping

# Download test image
echo "Downloading test image..."
wget https://aka.ms/torchserve-test-image -O kitten_small.jpg

# Send the test image for scoring
echo "Uploading test image for scoring..."
curl http://localhost:8080/predictions/densenet161 -T kitten_small.jpg

docker stop torchserve-test

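If the container is healthy, the ping request returns {"status": "Healthy"}, and the scoring request returns a JSON map of the model's top predicted classes and their probabilities.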

Create endpoint YAML

Create a YAML file that specifies the properties of the managed online endpoint you would like to create. In the example below, we specify the model to deploy, the Docker image we just built, and the Azure Virtual Machine size to use when deploying. The inference_config section tells Azure Machine Learning which port and path to use for liveness, readiness, and scoring requests against the container, and the MODEL_BASE_PATH environment variable points at the path where the registered model is mounted inside the container.

$schema: https://azuremlsdk2.blob.core.windows.net/latest/managedOnlineEndpoint.schema.json
name: torchserve-endpoint
type: online
auth_mode: aml_token
traffic:
  torchserve: 100

deployments:
  - name: torchserve
    model:
      name: torchserve-densenet161
      version: 1
      local_path: ./torchserve
    environment_variables:
      MODEL_BASE_PATH: /var/azureml-app/azureml-models/torchserve-densenet161/1
    environment:
      name: torchserve
      version: 1
      docker:
        image: {{acr_name}}.azurecr.io/torchserve:8080
      inference_config:
        liveness_route:
          port: 8080
          path: /ping
        readiness_route:
          port: 8080
          path: /ping
        scoring_route:
          port: 8080
          path: /predictions/densenet161
    instance_type: Standard_F2s_v2
    scale_settings:
      scale_type: manual
      instance_count: 1
      min_instances: 1
      max_instances: 2

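Note the {{acr_name}} placeholder in the image name above. In the sample, placeholders like this are substituted with real values before the endpoint is created; a hypothetical substitution command (not necessarily the sample's exact invocation) would be:

sed -i "s/{{acr_name}}/$ACR_NAME/g" $BASE_PATH/$ENDPOINT_NAME.yml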

Create endpoint

Now that you have tested locally and you have a YAML file, you can create your endpoint:

az ml endpoint create -f $BASE_PATH/$ENDPOINT_NAME.yml -n $ENDPOINT_NAME

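Endpoint creation can take several minutes. You can poll the endpoint's state while you wait; this query assumes the preview CLI's provisioning_state field, which may differ in later versions:

az ml endpoint show -n $ENDPOINT_NAME --query provisioning_state -o tsv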

Send a scoring request

Once your endpoint finishes deploying, you can send it unlabeled data for scoring:

# Get access token for the endpoint
echo "Getting access token..."
TOKEN=$(az ml endpoint get-credentials -n $ENDPOINT_NAME --query accessToken -o tsv)

# Get scoring URL
echo "Getting scoring url..."
SCORING_URL=$(az ml endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv)
echo "Scoring url is $SCORING_URL"

# Send the test image for scoring
echo "Uploading test image for scoring..."
curl -H "Authorization: Bearer $TOKEN" -T kitten_small.jpg $SCORING_URL

Delete resources

Now that you have successfully created and tested your TorchServe endpoint, you can delete it.

# Delete endpoint
echo "Deleting endpoint..."
az ml endpoint delete -n $ENDPOINT_NAME --yes

# Delete model; AML_MODEL_NAME is the model name registered via the
# endpoint YAML (torchserve-densenet161 in this sample)
echo "Deleting model..."
az ml model delete -n $AML_MODEL_NAME --version 1

Next steps

Read our documentation to learn more and see our other samples.
