Using Trident to Automate Azure NetApp Files from OpenShift

Published 05-21-2021

Some time ago I wrote this post about different storage options in Azure Red Hat OpenShift. One of the options discussed was using Azure NetApp Files for persistent storage of your pods. As discussed in that post, Azure NetApp Files (ANF) has some advantages:


  • ReadWriteMany support
  • Does not count against the limit of Azure Disks per VM
  • Different performance tiers, the most performant one being 128MiB/s per TiB of volume capacity
  • The NetApp tooling ecosystem

There is one situation where Azure NetApp Files will not be a great fit: if you only need a small share, since the minimum pool size in which Azure NetApp Files can be ordered is 4 TiB. You can carve many small volumes out of that 4 TiB pool, but if the only thing you need is a small share, other options might be more cost effective.


The three different performance tiers of Azure NetApp Files can be very flexible, offering between 16 and 128 MiB/s per provisioned TiB. For example, at 1 TiB a Premium SSD (P30) would give you 200 MiB/s, while an ANF volume would give you up to 128 MiB/s. Not quite the performance of a Premium SSD, but it doesn't fall too far behind either.
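The arithmetic behind those numbers is simple: multiply the provisioned size in TiB by the per-TiB throughput of the tier. A quick sketch (the 16/64/128 figures for the Standard/Premium/Ultra tiers come from the ANF documentation):

```shell
# Expected ANF throughput for a given volume size, per service level
# (16/64/128 MiB/s per provisioned TiB for Standard/Premium/Ultra)
size_tib=1
for per_tib in 16 64 128; do
  echo "$((size_tib * per_tib)) MiB/s"
done
# prints: 16 MiB/s, 64 MiB/s, 128 MiB/s
```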


But let's go back to our post title: what is Trident? In a standard setup, you would have to create the ANF volume manually and assign it to the different pods that need it. With the Trident project, however, NetApp gives Kubernetes clusters the ability to create and destroy those volumes automatically, tied to the Persistent Volume Claim lifecycle. Hence, when the application is deployed to OpenShift, nobody needs to go to the Azure Portal and provision storage in advance: the volumes are created through the Kubernetes API, via the functionality of Trident.


As the Trident documentation says, OpenShift is a supported platform. I did not find any blog about whether it would work on Azure Red Hat OpenShift (why shouldn't it?), so I decided to give it a go. I installed Trident on my ARO cluster following this great post by Sean Luce: Azure NetApp Files + Trident, and it was a breeze. You need the client tooling tridentctl, which will do some of the required operations for you (more on this further down).


I created the ANF account and pool with the Azure CLI (Sean is using the Azure Portal). Trident needs a Service Principal to interact with Azure NetApp Files. In my case I am using the cluster SP, to which I granted contributor access for the ANF account:


az netappfiles account create -g $rg -n $anf_name -l $anf_location
az netappfiles pool create -g $rg -a $anf_name -n $anf_name -l $anf_location --size 4 --service-level Standard
anf_account_id=$(az netappfiles account show -n $anf_name -g $rg --query id -o tsv)
az role assignment create --scope $anf_account_id --assignee $sp_app_id --role 'Contributor'

Now you need to install the Trident software (unsurprisingly, Helm is your friend here), and add a “backend”, which will teach Trident how to access that Azure NetApp Files pool you created a minute ago:


# Create ANF backend
# Credits to
subscription_id=$(az account show --query id -o tsv)
tenant_id=$(az account show --query tenantId -o tsv)
cat <<EOF > $trident_backend_file
{
  "version": 1,
  "storageDriverName": "azure-netapp-files",
  "subscriptionID": "$subscription_id",
  "tenantID": "$tenant_id",
  "clientID": "$sp_app_id",
  "clientSecret": "$sp_app_secret",
  "location": "$anf_location",
  "serviceLevel": "Standard",
  "virtualNetwork": "$vnet_name",
  "subnet": "$anf_subnet_name",
  "nfsMountOptions": "vers=3,proto=tcp,timeo=600",
  "limitVolumeSize": "500Gi",
  "defaults": {
    "exportRule": "",
    "size": "200Gi"
  }
}
EOF
tridentctl -n $trident_ns create backend -f $trident_backend_file
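Since the backend file is built with a heredoc full of shell variables, it is easy to end up with a malformed file if one of them is unset. A minimal sanity check before handing the file to tridentctl (this sketch assumes python3 is available and uses an illustrative file path and a trimmed-down backend):

```shell
# Sanity-check that the heredoc expanded into valid JSON before registering it
# (python3 is used here purely as a JSON validator; path and values are illustrative)
trident_backend_file=/tmp/backend.json
service_level=Standard
cat <<EOF > $trident_backend_file
{
  "version": 1,
  "storageDriverName": "azure-netapp-files",
  "serviceLevel": "$service_level"
}
EOF
python3 -m json.tool "$trident_backend_file" > /dev/null && echo "backend file is valid JSON"
```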


After this, we need a way for OpenShift to consume this backend, through the same standard Kubernetes construct as any other OpenShift storage technology: a storage class.


# Create Storage Class
# Credits to
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurenetappfiles
provisioner: csi.trident.netapp.io
parameters:
  backendType: "azure-netapp-files"
EOF

So OpenShift will now have 2 storage classes: the default one, which leverages Azure Premium managed disks, plus the new one that has been created to interact with ANF:


$ kubectl get sc
NAME                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
azurenetappfiles            Delete          Immediate              false
managed-premium (default)   Delete          WaitForFirstConsumer   true


And here comes the magic: when a Persistent Volume Claim is created and associated with that storage class, an ANF volume will be instantiated too, matching the parameters specified in the PVC. To create the PVC I will stick to Sean's example, with a 100 GiB volume:


# Create PVC
# Credits to
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurenetappfiles
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: azurenetappfiles
EOF

The PVC is now visible in OpenShift:


$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
azurenetappfiles   Bound    pvc-adc0348d-5752-4e44-82c2-8205f39c376d   100Gi      RWX            azurenetappfiles   8h

And sure enough, you can use the Azure Portal to browse your account, pool, and the newly created volume:



The Azure CLI will give us information about the created volume as well. The default output was a bit busy and didn't fit in my screen width, so I picked my own set of columns with the properties I was interested in:


$ az netappfiles volume list -g $rg -a $anf_name -p $anf_name -o table --query '[].{Name:name, ProvisioningState:provisioningState, ThroughputMibps:throughputMibps, ServiceLevel:serviceLevel, Location:location}'

Name                                                      ProvisioningState    ThroughputMibps    ServiceLevel    Location
--------------------------------------------------------  -------------------  -----------------  --------------  -----------
anf5550/anf5550/pvc-adc0348d-5752-4e44-82c2-8205f39c376d  Succeeded            1.6                Standard        northeurope


Interestingly enough, I couldn't see the volume size in the object properties, but it can easily be inferred: the volume is Standard, and from the Azure NetApp Files performance tiers we know that Standard means 16 MiB/s per provisioned TiB. Hence, 1.6 MiB/s corresponds to 100 GiB: the maths still works!
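We can run that calculation forward as a sanity check: 100 GiB is 100/1024 TiB, and at 16 MiB/s per TiB that yields just under 1.6 MiB/s (the CLI output is rounded):

```shell
# 100 GiB at the Standard tier (16 MiB/s per provisioned TiB)
awk 'BEGIN { printf "%.4f MiB/s\n", 100 / 1024 * 16 }'
# prints: 1.5625 MiB/s
```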


I used my sqlapi image, which includes a rudimentary I/O performance benchmark tool based on this code by thodnev, to verify those expected 1.6 MiB/s:


# Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
  labels:
    app: $name
    deploymethod: trident
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $name
  template:
    metadata:
      labels:
        app: $name
        deploymethod: trident
    spec:
      containers:
      - name: $name
        image: erjosito/sqlapi:1.0
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: disk01
          mountPath: /mnt/disk
      volumes:
      - name: disk01
        persistentVolumeClaim:
          claimName: azurenetappfiles
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $name
  name: $name
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: $name
  type: LoadBalancer
EOF

And here are the results of the I/O benchmark. I am not sure why the read bandwidth is so much higher than expected; I might have a bug in the I/O benchmarking code, or there is some caching involved somewhere. I will update this post when I find out more:


❯ curl ''
{
  "Filepath": "/mnt/disk/iotest",
  "Read IOPS": 201567.0,
  "Read bandwidth in MB/s": 1574.75,
  "Read block size (KB)": 8,
  "Read blocks": 65536,
  "Read time (sec)": 0.33,
  "Write IOPS": 13.0,
  "Write bandwidth in MB/s": 1.62,
  "Write block size (KB)": 128,
  "Write time (sec)": 315.51,
  "Written MB": 512,
  "Written blocks": 4096
}
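The write-side numbers are at least internally consistent: 512 MB written over 315.51 seconds works out to the reported 1.62 MB/s, which matches the throughput expected for a 100 GiB Standard volume:

```shell
# Written MB divided by write time should match the reported write bandwidth
awk 'BEGIN { printf "%.2f MB/s\n", 512 / 315.51 }'
# prints: 1.62 MB/s
```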

When you delete the application from OpenShift, including the PVC, the Azure NetApp Files volume will disappear as well, without anybody having to log in to Azure to do anything.


So that concludes this post, with a boring “it works as expected”. Thanks for reading!
