The NDv4 series is very popular for running large deep learning training jobs, which require high floating-point performance and high interconnect bandwidth. In this article we walk through the steps to deploy a complete, end-to-end, production-ready NDv4 cluster environment targeting large deep learning training.
The NDv4 cluster environment will consist of the following: an Azure Bastion landing zone with a jumpbox, a VNET with peering back to the landing zone, an Azure key vault, Azure NetApp Files volumes, a Windows box for accessing the CycleCloud web portal, a CycleCloud server and locker, and a SLURM cluster with a login node, a scheduler node, and NDv4 compute nodes.
The azurehpc GitHub repository is used to deploy the complete NDv4 cluster environment.
First, clone the azurehpc repository; we will be working primarily in the experimental/deploy_cycle_slurm_ndv4 directory.
git clone https://github.com/Azure/azurehpc.git
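Most of the remaining commands assume you are in the working directory named above; for example (assuming the repository was cloned into your current directory):
cd azurehpc/experimental/deploy_cycle_slurm_ndv4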
Deploy the Bastion and a jumpbox (referred to as bjumpbox); this will be your landing zone. See examples/bastion for how to deploy it, and edit examples/bastion/bastion_ssh_bjumpbox.sh so you can access your bjumpbox directly by running that script.
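As a rough sketch (assuming the examples/bastion directory contains its own azhpc configuration file, as other azurehpc examples do), the flow could look like this from the repository root:
cd examples/bastion
azhpc-build -c config.json          # assumption: the example ships its own config.json
# edit bastion_ssh_bjumpbox.sh (resource group, VM names) before connecting
./bastion_ssh_bjumpbox.sh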
Note: You will need the appropriate authentication to your subscription on your bjumpbox (e.g. you may need to execute "az login <ARGS>" to authenticate).
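For example, a minimal login sequence might look like this (the subscription ID is a placeholder):
az login --use-device-code
az account set --subscription "<SUBSCRIPTION_ID>"
az account show                     # verify the correct subscription is selected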
From your jumpbox, deploy the prerequisites (VNET, key vault, peering to the Bastion landing zone, and Azure NetApp Files). A prereqs.json configuration file is provided (edit the file before using it).
azhpc-build -c prereqs.json
Note: In the prereqs.json file, the uuid variable is just any unique set of characters/numbers that will be part of your key vault name (i.e. to make sure it is unique). Two Azure NetApp Files volumes are created: one for the user home directories and another for apps or data.
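Before building, it can help to double-check the variables you edited; for example, with jq (an assumption, any JSON viewer works):
jq '.variables' prereqs.json        # confirm the uuid and other variables look correct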
Optionally, a prereqs_sacct.json configuration file is also provided for the SLURM job accounting (sacct) prerequisites:
azhpc-build --no-vnet -c prereqs_sacct.json
azurehpc will generate the CycleCloud projects defined in the config.json (no container support) or config_pyxis_enroot.json (container support via Pyxis+Enroot integration with SLURM) files, but the scripts referenced in the projects need to be copied into the deploy_cycle_slurm_ndv4/scripts directory.
cp ../gpu_optimizations/max_gpu_app_clocks.sh scripts
cp ../cc_slurm_nhc/cc_slurm_nhc/specs/default/cluster-init/files/* scripts
cp ../cc_slurm_pyxis_enroot/cc_slurm_pyxis_enroot/specs/default/cluster-init/files/* scripts
Now we deploy the CycleCloud server and CycleCloud locker, generate the CycleCloud projects, upload them to the locker, and create the NDv4 cluster. Two configuration files are provided: config.json and config_pyxis_enroot.json.
azhpc-build --no-vnet -c config_pyxis_enroot.json
Note: The variable "projectstore" is the name of the storage account used by cyclecloud to store packages and projects (i.e. the cyclecloud locker), make sure to use a unique name and make sure the "storage" resource (e.g storage account) you are deploying has the same name.
You have two options to start the NDv4 cluster: log in to the Windows server via Bastion and start it via the CycleCloud web portal, or log in to the jumpbox and start it via the CycleCloud CLI.
azhpc-connect jumpbox
cyclecloud start_cluster slurmcycle
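Once started, you can check the cluster state from the jumpbox with the CycleCloud CLI, for example:
cyclecloud show_cluster slurmcycle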
Now you can log in to the login node (login-1) and submit jobs via the SLURM scheduler.
From the jumpbox
cyclecloud connect login-1 -c slurmcycle
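As a sketch of what a multi-node job submission could look like from the login node (the "hpc" partition name and 8 GPUs per node are assumptions based on the node names used later in this article and on the ND A100 v4 size; adjust to your cluster):
#!/bin/bash
#SBATCH --job-name=ndv4-test
#SBATCH --partition=hpc            # assumption: partition name from the CycleCloud SLURM template
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8        # one task per GPU (ND A100 v4 has 8 GPUs per VM)
#SBATCH --gpus-per-node=8          # assumes GPU GRES is configured
srun hostname
Save this as, for example, ndv4-test.sh and submit it with sbatch ndv4-test.sh.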
For the web portal option, you will first need to retrieve the Windows box (winbox) and CycleCloud server passwords from the Azure key vault (they were added to the key vault using the prereqs.json configuration file). You will also need the cycleserver private IP address.
Fortunately, azurehpc has support to retrieve secrets from the Azure key vault.
azhpc-get secret.{{variables.key_vault}}.WinPassword
azhpc-get secret.{{variables.key_vault}}.CycleAdminPassword
azhpc-get ip.cycleserver
Go to the Azure portal (to your resource group) and log in to the winbox via Bastion, using "hpcadmin" for the username and the retrieved password. Then, from the Windows box, browse to the cycleserver private IP address and use the "hpcadmin" user and the retrieved password to access the cycleserver.
Autoscaling is disabled in this configuration, so all NDv4 nodes in the cluster need to be manually added (up to the maximum number of cores you specified in the CycleCloud configuration) or deleted.
You can add or delete nodes via the CycleCloud web portal, but the recommended way is from the scheduler node using the CycleCloud-provided scripts.
First, from the jumpbox, log in to the scheduler via the CycleCloud CLI
cyclecloud connect scheduler -c slurmcycle
Then, from the scheduler, to add nodes
sudo /opt/cycle/slurm/resume_program.sh slurmcycle-hpc-pg0-[1-4]
To delete nodes
sudo /opt/cycle/slurm/suspend_program.sh slurmcycle-hpc-pg0-1
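After resuming or suspending nodes, you can verify their state with standard SLURM commands from the scheduler, for example:
sinfo -N -l                         # list nodes and their current state
squeue                              # show running and pending jobs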
Moneo, a distributed GPU system monitoring solution for AI workflows based on Prometheus and Grafana, can be easily integrated into this cluster. If you would prefer a more Azure-native approach, HPC/AI cluster monitoring using Azure Monitor leverages the Azure Monitor service.
The deployment procedure outlined above allows you to quickly deploy a complete, production-ready NDv4 cluster ideal for large deep learning training jobs. The azurehpc framework is very flexible and allows you to easily customize your deployment.