<LINGO-SUB id="lingo-sub-1052536" slang="en-US">Lustre on Azure</LINGO-SUB><LINGO-BODY id="lingo-body-1052536" slang="en-US"><H1 id="lustre-on-azure">Lustre on Azure</H1>
<P><EM><STRONG>January 6, 2020:</STRONG> This content was recently updated to improve readability and to address some technical issues that were pointed out by readers. Thanks for your feedback. -AzureCAT</EM></P>
<P>Microsoft Azure has <A href="https://azure.microsoft.com/en-gb/blog/announcing-the-lv2-series-vms-powered-by-the-amd-epyc-processor/" target="_blank" rel="noopener noreferrer">Lv2</A> virtual machine (VM) instances that feature NVM Express (NVMe) disks for use in a Lustre filesystem. This is a cost-effective way to provision a high-performance filesystem on Azure. The disks are internal to the physical host and don’t have the same service-level agreement (SLA) as <A href="https://azure.microsoft.com/en-us/pricing/details/managed-disks/" target="_blank" rel="noopener noreferrer">premium disk storage</A>, but when coupled with Lustre's Hierarchical Storage Management (HSM) they provide a fast, on-demand, high-performance filesystem.</P>
<P>This guide outlines setting up a Lustre filesystem and PBS cluster with both <A href="https://github.com/Azure/azurehpc" target="_blank" rel="noopener noreferrer">AzureHPC</A> (scripts to automate deployment using the <A href="https://github.com/Azure/azure-cli" target="_blank" rel="noopener noreferrer">Azure CLI</A>) and <A href="https://azure.microsoft.com/en-gb/features/azure-cyclecloud/" target="_blank" rel="noopener noreferrer">Azure CycleCloud</A>, then running the <A href="https://github.com/hpc/ior" target="_blank" rel="noopener noreferrer">IOR</A> filesystem benchmark. It uses the HSM capabilities for archival and backup to <A href="https://azure.microsoft.com/en-gb/services/storage/blobs/" target="_blank" rel="noopener noreferrer">Azure Blob Storage</A> and shows how to view metrics in <A href="https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/get-started-portal" target="_blank" rel="noopener noreferrer">Log Analytics</A>.</P>
<H2 id="provisioning-with-azurehpc">Provisioning with AzureHPC</H2>
<P>First, download AzureHPC by running the following Git command:</P>
<PRE><CODE>git clone https://github.com/Azure/azurehpc.git</CODE></PRE>
<P>Next, set up the environment for your shell by running:</P>
<PRE><CODE>source azurehpc/install.sh</CODE></PRE>
<BLOCKQUOTE>
<P>Note: This <CODE>install.sh</CODE> file should be "sourced" in each bash session where you want to run the <CODE>azhpc-*</CODE> commands (alternatively, source it from your <CODE>~/.bashrc</CODE>).</P>
</BLOCKQUOTE>
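<P>Sourcing the script in every session can be automated, as the note suggests, by appending it to your shell startup file. A minimal sketch, assuming the repository was cloned into your home directory:</P>

```shell
# Make the azhpc-* commands available in every new bash session by
# sourcing install.sh from ~/.bashrc (the path is an assumption;
# adjust it to wherever you cloned azurehpc).
echo 'source ~/azurehpc/install.sh' >> ~/.bashrc
```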
<P>The AzureHPC project contains a Lustre filesystem and a PBS cluster example. To clone this example, run:</P>
<PRE><CODE>azhpc-init \
    -c $azhpc_dir/examples/lustre_combined \
    -d &lt;NEW-DIRECTORY-NAME&gt;</CODE></PRE>
<P>The example has the following variables that must be set in the config file:</P>
<TABLE>
<TBODY>
<TR><TD><STRONG>Variable</STRONG></TD><TD><STRONG>Description</STRONG></TD></TR>
<TR><TD>resource_group</TD><TD>The resource group for the project</TD></TR>
<TR><TD>storage_account</TD><TD>The storage account for HSM</TD></TR>
<TR><TD>storage_key</TD><TD>The storage key for HSM</TD></TR>
<TR><TD>storage_container</TD><TD>The container to use for HSM</TD></TR>
<TR><TD>log_analytics_lfs_name</TD><TD>The name to use in Log Analytics</TD></TR>
<TR><TD>log_analytics_workspace</TD><TD>The Log Analytics workspace ID</TD></TR>
<TR><TD>log_analytics_key</TD><TD>The Log Analytics key</TD></TR>
</TBODY>
</TABLE>
<BLOCKQUOTE>
<P>Note: Macros exist to get the <CODE>storage_key</CODE> using <CODE>sakey.&lt;STORAGE-ACCOUNT-NAME&gt;</CODE>, <CODE>log_analytics_workspace</CODE> using <CODE>laworkspace.&lt;RESOURCE-GROUP&gt;.&lt;WORKSPACE-NAME&gt;</CODE>, and <CODE>log_analytics_key</CODE> using <CODE>lakey.&lt;RESOURCE-GROUP&gt;.&lt;WORKSPACE-NAME&gt;</CODE>.</P>
</BLOCKQUOTE>
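<P>If you would rather look these values up yourself than rely on the macros, the Azure CLI can fetch each one. A sketch, assuming an authenticated <CODE>az</CODE> session; the resource names (<CODE>myrg</CODE>, <CODE>mystorage</CODE>, <CODE>myworkspace</CODE>) are placeholders:</P>

```shell
# storage_key: the value the sakey.<STORAGE-ACCOUNT-NAME> macro resolves
az storage account keys list \
    --resource-group myrg --account-name mystorage \
    --query "[0].value" --output tsv

# log_analytics_workspace: the workspace (customer) ID
az monitor log-analytics workspace show \
    --resource-group myrg --workspace-name myworkspace \
    --query customerId --output tsv

# log_analytics_key: the workspace shared key
az monitor log-analytics workspace get-shared-keys \
    --resource-group myrg --workspace-name myworkspace \
    --query primarySharedKey --output tsv
```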
<P>Other variables set the VM SKU and the number of instances to use. This example has a headnode (D16_v3), two compute nodes (D32_v3), and four Lustre nodes (L32_v2). There is also an <A href="https://azurehpc.azureedge.net/" target="_blank" rel="noopener noreferrer">AzureHPC web tool</A> you can use to view a config file, either by clicking <STRONG>Open</STRONG> and loading it locally or by passing a URL, for example, the <A href="https://azurehpc.azureedge.net/?o=https://raw.githubusercontent.com/Azure/azurehpc/master/examples/lustre_combined/config.json" target="_blank" rel="noopener noreferrer">lustre_combined</A> example.</P>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160398i6362DAD62031A517/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_0.png" />
<P class="caption"><EM>Figure 1. AzureHPC web tool showing the lustre_combined example</EM></P>
</DIV>
<P>Once the config file is set up, run:</P>
<PRE><CODE>azhpc-build</CODE></PRE>
<P>The progress is displayed as it runs. For example:</P>
<PRE><CODE>paul@nuc:~/Microsoft/azurehpc_projects/lustre_test$ azhpc-build
You have 2 updates available. Consider updating your CLI installation.
Thu  5 Dec 10:45:13 GMT 2019 : Azure account: AzureCAT-TD HPC (f5a67d06-2d09-4090-91cc-e3298907a021)
Thu  5 Dec 10:45:13 GMT 2019 : creating temp dir - azhpc_install_config
Thu  5 Dec 10:45:13 GMT 2019 : creating ssh keys for hpcadmin
Generating public/private rsa key pair.
Your identification has been saved in hpcadmin_id_rsa.
Your public key has been saved in hpcadmin_id_rsa.pub.
The key fingerprint is:
SHA256:sM+Wb0bByl4EoxrLV6TdkLEADSP/Mj0w94xIopH034M paul@nuc
The key's randomart image is:
+---[RSA 2048]----+
| .. ++. .o       |
|...o ...*.       |
|o ..= o=.*       |
| o ooB=*o =      |
|.  .+E*=So .     |
|    +o.++.o      |
|     . .=o       |
|       ...o      |
|         o.      |
+----[SHA256]-----+
Thu  5 Dec 10:45:13 GMT 2019 : creating resource group
Location    Name
----------  -------------------------
westeurope  paul-azurehpc-lustre-test
Thu  5 Dec 10:45:16 GMT 2019 : creating network

Thu  5 Dec 10:45:23 GMT 2019 : creating subnet compute
AddressPrefix    Name     PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  -------  --------------------------------  -----------------------------------  -------------------  -------------------------
10.2.0.0/22      compute  Enabled                           Enabled                              Succeeded            paul-azurehpc-lustre-test
Thu  5 Dec 10:45:29 GMT 2019 : creating subnet storage
AddressPrefix    Name     PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  -------  --------------------------------  -----------------------------------  -------------------  -------------------------
10.2.4.0/24      storage  Enabled                           Enabled                              Succeeded            paul-azurehpc-lustre-test
Thu  5 Dec 10:45:35 GMT 2019 : creating vmss: compute
Thu  5 Dec 10:45:40 GMT 2019 : creating vm: headnode
Thu  5 Dec 10:45:46 GMT 2019 : creating vmss: lustre
Thu  5 Dec 10:45:52 GMT 2019 : waiting for compute to be created
Thu  5 Dec 10:47:24 GMT 2019 : waiting for headnode to be created
Thu  5 Dec 10:47:26 GMT 2019 : waiting for lustre to be created
Thu  5 Dec 10:48:28 GMT 2019 : getting public ip for headnode
Thu  5 Dec 10:48:29 GMT 2019 : building hostlists
Thu  5 Dec 10:48:33 GMT 2019 : building install scripts
rsync azhpc_install_config to headnode0d5c95.westeurope.cloudapp.azure.com
Thu  5 Dec 10:48:42 GMT 2019 : running the install scripts
Step 0 : install_node_setup.sh (jumpbox_script)
    duration: 20 seconds
Step 1 : disable-selinux.sh (jumpbox_script)
    duration: 1 seconds
Step 2 : nfsserver.sh (jumpbox_script)
    duration: 32 seconds
Step 3 : nfsclient.sh (jumpbox_script)
    duration: 28 seconds
Step 4 : localuser.sh (jumpbox_script)
    duration: 2 seconds
Step 5 : create_raid0.sh (jumpbox_script)
    duration: 21 seconds
Step 6 : lfsrepo.sh (jumpbox_script)
    duration: 1 seconds
Step 7 : lfspkgs.sh (jumpbox_script)
    duration: 221 seconds
Step 8 : lfsmaster.sh (jumpbox_script)
    duration: 25 seconds
Step 9 : lfsoss.sh (jumpbox_script)
    duration: 5 seconds
Step 10 : lfshsm.sh (jumpbox_script)
    duration: 134 seconds
Step 11 : lfsclient.sh (jumpbox_script)
    duration: 117 seconds
Step 12 : lfsimport.sh (jumpbox_script)
    duration: 12 seconds
Step 13 : lfsloganalytics.sh (jumpbox_script)
    duration: 2 seconds
Step 14 : pbsdownload.sh (jumpbox_script)
    duration: 1 seconds
Step 15 : pbsserver.sh (jumpbox_script)
    duration: 61 seconds
Step 16 : pbsclient.sh (jumpbox_script)
    duration: 13 seconds
Step 17 : addmpich.sh (jumpbox_script)
    duration: 4 seconds
Thu  5 Dec 11:00:23 GMT 2019 : cluster ready</CODE></PRE>
<P>Once complete, you can connect to the headnode with the following command:</P>
<PRE><CODE>azhpc-connect -u hpcuser headnode</CODE></PRE>
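<P>Before submitting jobs, it is worth confirming from the headnode that the filesystem is mounted and that every target is online. A quick check; the <CODE>/lustre</CODE> mount point matches this example:</P>

```shell
# Show the client mount, then the free space reported per MDT/OST
# (lfs is installed by the lfsclient.sh step of the deployment).
df -h /lustre
lfs df -h /lustre
```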
<H2 id="provisioning-with-azure-cyclecloud">Provisioning with Azure CycleCloud</H2>
<P>This section walks you through setting up a Lustre filesystem and an autoscaling PBSPro cluster with the Lustre client installed. This process uses an Azure CycleCloud project, which is available <A href="https://github.com/edwardsp/cyclecloud-lfs" target="_blank" rel="noopener noreferrer">here</A>.</P>
<H3 id="installing-the-lustre-project-and-templates">Installing the Lustre project and templates</H3>
<P>These instructions assume that an Azure CycleCloud application server is running and that you have a terminal with both Git and the Azure CycleCloud CLI installed.</P>
<P>First, check out the <CODE>cyclecloud-lfs</CODE> repository:</P>
<PRE><CODE>git clone https://github.com/edwardsp/cyclecloud-lfs</CODE></PRE>
<P>This repository contains the Azure CycleCloud project and templates. There is an <CODE>lfs</CODE> template for the Lustre filesystem and a <CODE>pbspro-lfs</CODE> template, which is a modified version of the official <CODE>pbspro</CODE> template (from <A href="https://github.com/Azure/cyclecloud-pbspro/blob/master/templates/pbspro.txt" target="_blank" rel="noopener noreferrer">here</A>). The <CODE>pbspro-lfs</CODE> template is included in the GitHub project to test the Lustre filesystem. Instructions for adding the Lustre client to another template are found <A href="https://github.com/edwardsp/cyclecloud-lfs#extending-a-template-to-use-a-lustre-filesystem" target="_blank" rel="noopener noreferrer">here</A>.</P>
<P>The following commands upload the project and import the templates to Azure CycleCloud:</P>
<PRE><CODE>cd cyclecloud-lfs
cyclecloud project upload &lt;CONTAINER&gt;
cyclecloud import_template -f templates/lfs.txt
cyclecloud import_template -f templates/pbspro-lfs.txt</CODE></PRE>
<BLOCKQUOTE>
<P>Note: Replace <CODE>&lt;CONTAINER&gt;</CODE> with the Azure CycleCloud "locker" you want to use. You can list your lockers by running <CODE>cyclecloud locker list</CODE>.</P>
</BLOCKQUOTE>
<P>Once these commands are run, you will see the new templates in your Azure CycleCloud web interface, as shown in Figure 2.</P>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160426iD9FFC087D7DF8580/image-size/large?v=1.0&amp;px=999" alt="cyclecloud-lfs-templates.png" />
<P class="caption"><EM>Figure 2. Azure CycleCloud web interface</EM></P>
</DIV>
<H3 id="creating-the-lustre-cluster">Creating the Lustre Cluster</H3>
<P>1. Create the <CODE>lfs</CODE> cluster and choose a name, as shown in Figure 3:</P>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160427i9BF15BDE192805CB/image-size/large?v=1.0&amp;px=999" alt="cyclecloud-lfs-about.png" />
<P class="caption"><EM>Figure 3. Lustre Cluster <STRONG>About</STRONG></EM></P>
</DIV>
<BLOCKQUOTE>
<P>Note: This name is used later in the PBS cluster to reference this filesystem.</P>
</BLOCKQUOTE>
<P>2. Click <STRONG>Next</STRONG> to move to the <STRONG>Required Settings</STRONG>. Here you can choose the region and VM types. Choose only an <CODE>L_v2</CODE> instance type. It is not recommended to go beyond <CODE>L32_v2</CODE> because the <A href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-storage#lsv2-series" target="_blank" rel="noopener noreferrer">network throughput</A> does not scale linearly beyond this size. All NVMe disks are combined in a RAID 0 for the OST in the virtual machine.</P>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160401iF0558BF88829D8C6/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_3.png" />
<P class="caption"><EM>Figure 4. Lustre Cluster <STRONG>Required Settings</STRONG></EM></P>
</DIV>
<P>3. Choose the <STRONG>Base OS</STRONG> in <STRONG>Advanced Settings</STRONG>. This determines which version of Lustre to use. The scripts are set up to use the <A href="https://downloads.whamcloud.com/public/lustre/" target="_blank" rel="noopener noreferrer">Whamcloud</A> repository for Lustre, so RPMs for Lustre 2.10 are only available up to CentOS 7.6, and Lustre 2.12 is available for CentOS 7.7.</P>
<BLOCKQUOTE>
<P>Note: Both the server and client Lustre versions need to match.</P>
</BLOCKQUOTE>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160402i0DBA36A5AE8599B5/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_4.png" />
<P class="caption"><EM>Figure 5. Lustre Cluster <STRONG>Advanced Settings</STRONG></EM></P>
</DIV>
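<P>To confirm that the server and client versions do match, you can query the Lustre release actually loaded on each node. A sketch to run on both an OSS node and a client:</P>

```shell
# Print the Lustre version of the loaded kernel modules; the value
# should be the same on servers and clients (e.g. both 2.12.x).
sudo lctl get_param version

# Cross-check against the installed packages.
rpm -qa | grep -i lustre
```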
<P>4. In <STRONG>Lustre Settings</STRONG>, you can choose the <STRONG>Lustre version</STRONG> and the number of <STRONG>Additional OSS nodes</STRONG>. The number of OSS nodes is chosen here and can't be modified without recreating the filesystem.</P>
<P>5. To use HSM, enable the checkbox and provide details for a <STRONG>Storage Account</STRONG>, <STRONG>Storage Key</STRONG>, and <STRONG>Storage Container</STRONG>. All files in the selected container are imported into Lustre when the filesystem is started.</P>
<BLOCKQUOTE>
<P>Note: This only populates the metadata; files are downloaded on demand as they are accessed. Alternatively, they can be restored using the <CODE>lfs hsm_restore</CODE> command.</P>
</BLOCKQUOTE>
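<P>The on-demand behavior described in the note can be inspected and overridden with the <CODE>lfs</CODE> HSM commands. A sketch; the file path is a placeholder:</P>

```shell
# Show the HSM state of a file: "released" means only the metadata is
# in Lustre and the contents still live in the blob container.
lfs hsm_state /lustre/data/input.dat

# Stage the file back into Lustre ahead of time instead of waiting
# for the first access to trigger the retrieval.
lfs hsm_restore /lustre/data/input.dat
```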
<P>6. To use Log Analytics, enable the checkbox and provide details for the <STRONG>Name</STRONG>, <STRONG>Log Analytics Workspace</STRONG>, and <STRONG>Log Analytics Key</STRONG>. The <STRONG>Name</STRONG> is the log name to use for the metrics.</P>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160403i5A1DD91E84E29E82/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_5.png" />
<P class="caption"><EM>Figure 6. Lustre Cluster <STRONG>Advanced Settings</STRONG></EM></P>
</DIV>
<P>Click <STRONG>Save</STRONG> and start the cluster.</P>
<H3 id="creating-the-pbs-cluster">Creating the PBS Cluster</H3>
<P>To test the Lustre filesystem, create a <CODE>pbspro-lfs</CODE> cluster as follows:</P>
<OL>
<LI>Name the cluster, select the region, SKUs, and autoscale settings, and choose a subnet with access to the Lustre cluster.</LI>
<LI>In the <STRONG>Advanced Settings</STRONG>, make sure you know which version of CentOS you are using. At the time of writing, <CODE>Cycle CentOS 7</CODE> is version 7.6, but you may want to explicitly set the version with a custom image because the Azure CycleCloud version may be updated.</LI>
<LI>In <STRONG>Lustre Settings</STRONG>, choose from the available Lustre clusters in the dropdown menu.</LI>
<LI>Make sure the <STRONG>Lustre Version</STRONG> is correct for the OS that is chosen and check that it matches the Lustre cluster.</LI>
<LI>Choose the path for Lustre to be mounted on all the clients and click <STRONG>Save</STRONG>.</LI>
</OL>
<DIV class="figure"><IMG src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/160405iD8ED5FA18C8F9983/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_6.png" />
<P class="caption"><EM>Figure 7. PBSPro-lfs Cluster <STRONG>Advanced Settings</STRONG></EM></P>
</DIV>
<P>Once the Lustre cluster is running, you can start this cluster.</P>
eHPC%20or%20CycleCloud%20version%20can%20be%20used%2C%20but%20the%20commands%20change%20slightly%20depending%20on%20the%20image%20and%20OS%20version%20used.%20The%20following%20commands%20relate%20to%20the%20%3CCODE%3Elustre_combined%3C%2FCODE%3E%20AzureHPC%20example.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EFirst%2C%20connect%20to%20the%20headnode%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Eazhpc-connect%20-u%20hpcuser%20headnode%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EWe%20are%20compiling%20%3CCODE%3Eior%3C%2FCODE%3E%2C%20and%20this%20requires%20the%20MPI%20compiler%20on%20the%20headnode%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Esudo%20yum%20-y%20install%20mpich-devel%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ENow%2C%20download%20and%20compile%20%3CCODE%3Eior%3C%2FCODE%3E%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Emodule%20load%20mpi%2Fmpich-3.0-x86_64%0Awget%20https%3A%2F%2Fgithub.com%2Fhpc%2Fior%2Freleases%2Fdownload%2F3.2.1%2Fior-3.2.1.tar.gz%0Atar%20zxvf%20ior-3.2.1.tar.gz%0Acd%20ior-3.2.1%0A.%2Fconfigure%20--prefix%3D%24HOME%2Fior%0Amake%0Amake%20install%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EMove%20to%20the%20lustre%20filesystem%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Ecd%20%2Flustre%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ECreate%20a%20PBS%20job%20file.%20For%20example%2C%20%3CCODE%3Erun_ior.pbs%3C%2FCODE%3E%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%23!%2Fbin%2Fbash%0A%0Asource%20%2Fetc%2Fprofile%0Amodule%20load%20mpi%2Fmpich-3.0-x86_64%0A%0Acd%20%24PBS_O_WORKDIR%0A%0ANP%3D%24(wc%20-l%20%26lt%3B%24PBS_NODEFILE)%0ANODES%3D%24(sort%20-u%20%24PBS_NODEFILE%20%7C%20wc%20-l)%0APPN%3D%24((%24NP%20%2F%20%24NODES))%0A%0ATIMESTAMP%3D%24(date%20%2B%22%25Y-%25m-%25d_%25H-%25M-%25S%22)%0A%0Ampirun%20-np%20%24NP%20-machinefile%20%24PBS_NODEFILE%20%5C%0A%20%20%20%20
%24HOME%2Fior%2Fbin%2Fior%20-a%20POSIX%20-v%20-z%20-i%201%20-m%20-d%201%20-B%20-e%20-F%20-r%20-w%20-t%2032m%20-b%204G%20%5C%0A%20%20%20%20-o%20%24PWD%2Ftest.%24TIMESTAMP%20%5C%0A%20%20%20%20%7C%20tee%20ior-%24%7BNODES%7Dx%24%7BPPN%7D.%24TIMESTAMP.log%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3ESubmit%20an%20%3CCODE%3Eior%3C%2FCODE%3E%20benchmark%20as%20follows%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Eclient_nodes%3D2%0Aprocs_per_node%3D32%0Aqsub%20-lselect%3D%24%7Bclient_nodes%7D%3Ancpus%3D%24%7Bprocs_per_node%7D%3Ampiprocs%3D%24%7Bprocs_per_node%7D%2Cplace%3Dscatter%3Aexcl%20run_ior.pbs%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EFigure%208%20shows%20the%20results%20of%20testing%20the%20bandwidth%20of%20the%20Lustre%20filesystem%2C%20scaling%20from%201%20to%2016%20OSS%20VMs.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CDIV%20class%3D%22figure%22%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20style%3D%22width%3A%20999px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F160404iF507C9F03CB92A88%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20alt%3D%22clipboard_image_7.png%22%20title%3D%22clipboard_image_7.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FDIV%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%3CEM%3EFigure%208.%20IOR%20benchmark%20results%3C%2FEM%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EIn%20each%20run%2C%20the%20same%20number%20of%20client%20VMs%20were%20used%20as%20there%20are%20OSS%20VMs%20and%2032%20processes%20were%20run%20on%20each%20client%20VM.%20Each%20client%20VM%20is%20a%20%3CCODE%3ED32_v3%3C%2FCODE%3E%2C%20which%20has%20expected%20bandwidth%20of%2016%2C000%20Mbps%20(see%20%3CA%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fvirtual-machines%2Flinux%2Fsizes-general%23dsv3-series-1%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehe
re%3C%2FA%3E)%20and%20each%20OSS%20VM%20is%20an%20%3CCODE%3EL32_v2%3C%2FCODE%3E%2C%20which%20has%20the%20expected%20bandwidth%20of%2012%2C800%20Mbps%20(see%20%3CA%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fvirtual-machines%2Flinux%2Fsizes-storage%23lsv2-series%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehere%3C%2FA%3E).%20This%20means%20that%20a%20single%20client%20should%20be%20able%20to%20saturate%20the%20bandwidth%20of%20one%20OSS.%20The%20max%20network%20is%20the%20expected%20bandwidth%20from%20the%20OSS%20multiplied%20by%20the%20number%20of%20OSS%20VMs.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20id%3D%22using-hsm%22%20id%3D%22toc-hId-1279846071%22%20id%3D%22toc-hId-1279846071%22%3EUsing%20HSM%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20AzureHPC%20examples%20and%20the%20Azure%20CycleCloud%20templates%20set%20up%20HSM%20on%20Lustre%20and%20import%20the%20storage%20container%20when%20the%20filesystem%20is%20created.%20Only%20the%20metadata%20is%20read%2C%20so%20files%20are%20downloaded%20on-demand%20as%20they%20are%20accessed.%20But%2C%20other%20than%20on-demand%20downloads%2C%20all%20the%20other%20commands%20for%20archival%20are%20not%20automatic.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20copytool%20for%20Azure%20is%20available%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2Fedwardsp%2Flemur%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehere%3C%2FA%3E.%20This%20copytool%20supports%20users%2C%20groups%2C%20and%20UNIX%20file%20permissions%20that%20are%20added%20as%20meta-data%20to%20the%20files%20stored%20in%20Azure%20Blob%20storage.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20id%3D%22hsm-commands%22%20id%3D%22toc-hId-1970407545%22%20id%3D%22toc-hId-1970407545%22%3EHSM%20commands%3C%2FH3%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20HSM%20actions%20are%20available%20with%20the%20%3CCODE%3Elfs%3C%2FCO
DE%3E%20command.%20All%20the%20commands%20that%20follow%20work%20with%20multiple%20files%20as%20arguments.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH4%20id%3D%22achive%22%20id%3D%22toc-hId--403127678%22%20id%3D%22toc-hId--403127678%22%3EArchive%3C%2FH4%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20%3CCODE%3Elfs%20hsm_archive%3C%2FCODE%3E%20command%20copies%20the%20file%20to%20Azure%20Blob%20storage.%20Example%20usage%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%24%20sudo%20lfs%20hsm_archive%20myfile%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CH4%20id%3D%22toc-hId-2084385155%22%20id%3D%22toc-hId-2084385155%22%3E%26nbsp%3B%3C%2FH4%3E%0A%3CH4%20id%3D%22release%22%20id%3D%22toc-hId-276930692%22%20id%3D%22toc-hId-276930692%22%3ERelease%3C%2FH4%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20%3CCODE%3Elfs%20hsm_release%3C%2FCODE%3E%20command%20releases%20an%20archived%20file%20from%20the%20Lustre%20filesystem.%20It%20no%20longer%20takes%20up%20space%20in%20Lustre%2C%20but%20it%20still%20appears%20in%20the%20filesystem.%20When%20opened%2C%20it's%20downloaded%20again.%20Example%20usage%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%24%20sudo%20lfs%20hsm_release%20myfile%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CH4%20id%3D%22toc-hId--1530523771%22%20id%3D%22toc-hId--1530523771%22%3E%26nbsp%3B%3C%2FH4%3E%0A%3CH4%20id%3D%22remove%22%20id%3D%22toc-hId-956989062%22%20id%3D%22toc-hId-956989062%22%3ERemove%3C%2FH4%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20%3CCODE%3Elfs%20hsm_remove%3C%2FCODE%3E%20command%20deletes%20an%20archived%20file%20from%20the%20archive.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CSPAN%3E%24%20sudo%20lfs%20hsm_remove%20myfile%3C%2FSPAN%3E%3C%2FPRE%3E%0A%3CH4%20id%3D%22toc-hId--850465401%22%20id%3D%22toc-hId--850465401%22%3E%26nbsp%3B%3C%2FH4%3E%0A%3CH4%20id%3D%22state%22%20id%3D%22toc-hId-1637047432%22%20id%3D%22toc-hId-1637047432%22%3EState%3C%2FH4%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20%3CCODE%3Elfs%20hsm_state%3C
%2FCODE%3E%20command%20shows%20the%20state%20of%20the%20file%20in%20the%20filesystem.%20This%20is%20output%20for%20a%20file%20that%20isn't%20archived%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%24%20sudo%20lfs%20hsm_state%20myfile%20%0Amyfile%3A%20(0x00000000)%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThis%20is%20output%20for%20a%20file%20that%20is%20archived%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%24%20sudo%20lfs%20hsm_state%20myfile%20%0Amyfile%3A%20(0x0000000d)%20exists%20archived%2C%20archive_id%3A1%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThis%20is%20output%20for%20a%20file%20that%20is%20archived%20and%20released%20(that%20is%2C%20in%20storage%20but%20not%20taking%20up%20space%20in%20the%20filesystem)%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%24%20sudo%20lfs%20hsm_state%20myfile%20%0Amyfile%3A%20(0x0000000d)%20released%20exists%20archived%2C%20archive_id%3A1%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CH4%20id%3D%22toc-hId--170407031%22%20id%3D%22toc-hId--170407031%22%3E%26nbsp%3B%3C%2FH4%3E%0A%3CH4%20id%3D%22action%22%20id%3D%22toc-hId--1977861494%22%20id%3D%22toc-hId--1977861494%22%3EAction%3C%2FH4%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20%3CCODE%3Elfs%20hsm_action%3C%2FCODE%3E%20command%20displays%20the%20current%20HSM%20request%20for%20a%20given%20file.%20This%20is%20most%20useful%20when%20checking%20the%20progress%20on%20files%20being%20archived%20or%20restored.%20When%20there%20is%20no%20ongoing%20or%20pending%20HSM%20request%2C%20it%20displays%20%3CCODE%3ENOOP%3C%2FCODE%3E%20for%20the%20file.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20id%3D%22rehydrating-the-whole-filesystem-from-blob-storage%22%20id%3D%22toc-hId-380568620%22%20id%3D%22toc-hId-380568620%22%3ERehydrating%20the%20whole%20filesystem%20from%20blob%20storage%3C%2FH3%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EIn%20certain%20cases%2C%20you%20may%20want%20to%20restore%20all%20the%20rel
eased%20(or%20imported)%20files%20into%20the%20filesystem.%20This%20is%20best%20used%20in%20cases%20where%20all%20the%20files%20are%20required%20and%20you%20don't%20want%20the%20application%20to%20wait%20for%20each%20file%20to%20be%20retrieved%20separately.%20This%20can%20be%20started%20with%20the%20following%20command%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Ecd%20%3CLUSTRE_ROOT%3E%0Afind%20.%20-type%20f%20-print0%20%7C%20xargs%20-r0%20-L%2050%20sudo%20lfs%20hsm_restore%3C%2FLUSTRE_ROOT%3E%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20progress%20of%20the%20files%20can%20be%20checked%20with%20%3CCODE%3Esudo%20lfs%20hsm_action%3C%2FCODE%3E.%20To%20find%20out%20how%20many%20files%20are%20left%20to%20be%20restored%2C%20use%20the%20following%20command%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3Ecd%20%3CLUSTRE_ROOT%3E%0Afind%20.%20-type%20f%20-print0%20%5C%0A%20%20%20%20%7C%20xargs%20-r0%20-L%2050%20sudo%20lfs%20hsm_restore%20%5C%0A%20%20%20%20%7C%20grep%20-v%20NOOP%20%5C%0A%20%20%20%20%7C%20wc%20-l%3C%2FLUSTRE_ROOT%3E%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20id%3D%22viewing-lustre-metrics-in-log-analytics%22%20id%3D%22toc-hId--857806621%22%20id%3D%22toc-hId--857806621%22%3EViewing%20Lustre%20metrics%20in%20Log%20Analytics%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EEach%20Lustre%20VM%20logs%20the%20following%20metrics%20every%20sixty%20seconds%20if%20log%20analytics%20is%20enabled%3A%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3ELoad%20average%3C%2FLI%3E%0A%3CLI%3EKilobytes%20free%3C%2FLI%3E%0A%3CLI%3ENetwork%20bytes%20sent%3C%2FLI%3E%0A%3CLI%3ENetwork%20bytes%20received%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EYou%20can%20view%20this%20in%20the%20portal%20by%20selecting%20%3CSTRONG%3EMonitor%20%3C%2FSTRONG%3Eand%20then%20%3CSTRONG%3ELogs%3C%2FSTRONG%3E.%20Here%20is%20an%20example%20query%3A%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CPRE%3E%3CCODE%3E%3CLOG-NAME%3E_C
L%0A%7C%20summarize%20max(loadavg_d)%2Cmax(bytessend_d)%2Cmax(bytesrecv_d)%20by%20bin(TimeGenerated%2C1m)%2C%20hostname_s%0A%7C%20render%20timechart%3C%2FLOG-NAME%3E%3C%2FCODE%3E%3C%2FPRE%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CBLOCKQUOTE%3E%0A%3CP%3ENote%3A%20Substitute%20%3CCODE%3E%3CLOG-NAME%3E%3C%2FLOG-NAME%3E%3C%2FCODE%3E%20for%20the%20name%20you%20chose.%3C%2FP%3E%0A%3C%2FBLOCKQUOTE%3E%0A%3CDIV%20class%3D%22figure%22%3E%26nbsp%3B%3C%2FDIV%3E%0A%3CDIV%20class%3D%22figure%22%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-inline%22%20style%3D%22width%3A%20991px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F160406i15DF41DFD757474E%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20alt%3D%22clipboard_image_8.png%22%20title%3D%22clipboard_image_8.png%22%20%2F%3E%3C%2FSPAN%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20class%3D%22caption%22%3E%3CEM%3EFigure%209.%26nbsp%3BLog%20Analytics%20data%3C%2FEM%3E%3C%2FP%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20class%3D%22caption%22%20id%3D%22toc-hId-1629706212%22%20id%3D%22toc-hId-1629706212%22%3ESummary%3C%2FH2%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20class%3D%22caption%22%3EThis%20post%20outlined%20setting%20up%20a%20Lustre%20filesystem%20and%20PBS%20cluster%20using%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2FAzure%2Fazurehpc%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3EAzureHPC%3C%2FA%3E%20and%20%3CA%20href%3D%22https%3A%2F%2Fazure.microsoft.com%2Fen-gb%2Ffeatures%2Fazure-cyclecloud%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3EAzure%20CycleCloud%3C%2FA%3E.%20It%20uses%20the%20HSM%20capabilities%20for%20archival%20and%20backup%20to%20Azure%20Blob%20Storage%20and%20viewing%20metrics%20in%20Log%20Analytics.%20This%20is%20a%20cost-effective%20way%20to%20provisio
n%20a%20high-performance%20filesystem%20on%20Azure.%3C%2FP%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20class%3D%22caption%22%20id%3D%22toc-hId--177748251%22%20id%3D%22toc-hId--177748251%22%3ELearn%20more%3C%2FH2%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CUL%3E%0A%3CLI%3E%3CA%20href%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2FAzureCAT%2FAzureCAT-eBook-Parallel-Virtual-File-Systems-on-Microsoft-Azure%2Fba-p%2F306470%22%20target%3D%22_blank%22%20rel%3D%22noopener%22%3EAzureCAT%20eBook%3A%20Parallel%20Virtual%20File%20Systems%20on%20Microsoft%20Azure%3C%2FA%3E%3C%2FLI%3E%0A%3CLI%3E%3CA%20href%3D%22https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2FAzureCAT%2FParallel-Virtual-File-Systems-on-Microsoft-Azure-Part-2-Lustre%2Fba-p%2F306524%22%20target%3D%22_blank%22%20rel%3D%22noopener%22%3EParallel%20Virtual%20File%20Systems%20on%20Microsoft%20Azure%20-%20Part%202%3A%20Lustre%20on%20Azure%3C%2FA%3E%3C%2FLI%3E%0A%3C%2FUL%3E%0A%3CP%20class%3D%22caption%22%3E%26nbsp%3B%3C%2FP%3E%0A%3C%2FDIV%3E%3C%2FLINGO-BODY%3E%3CLINGO-TEASER%20id%3D%22lingo-teaser-1052536%22%20slang%3D%22en-US%22%3E%3CP%3EA%20guide%20to%20running%20Lustre%20on%20Azure.%26nbsp%3B%20Covering%20provisioning%20with%20both%20Azure%20CycleCloud%20and%20Azure%20CLI%2C%20benchmarking%20with%20IOR%2C%20importing%20and%20backing%20up%20to%20Azure%20BLOB%20storage%20with%20HSM%20and%20monitoring%20with%20Log%20Analytics.%3C%2FP%3E%3C%2FLINGO-TEASER%3E%3CLINGO-LABS%20id%3D%22lingo-labs-1052536%22%20slang%3D%22en-US%22%3E%3CLINGO-LABEL%3Eazhpc%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Eazurehpc%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Ecyclecloud%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Efilesystem%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EHPC%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EIOR%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Elustre%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EPBS%3C%2FLINGO-LABEL%3E%3C%2FLINGO-LABS%3E
Microsoft

Lustre on Azure

 

January 6, 2020: This content was recently updated to improve readability and to address some technical issues that were pointed out by readers. Thanks for your feedback. -AzureCAT

 

Microsoft Azure has Lv2 virtual machine (VM) instances that feature NVM Express (NVMe) disks for use in a Lustre filesystem. This is a cost-effective way to provision a high-performance filesystem on Azure. The disks are internal to the physical host and don’t have the same service-level agreement (SLA) as premium disk storage, but when coupled with hierarchical storage management (HSM) they provide a fast, on-demand, high-performance filesystem.

 

This guide outlines setting up a Lustre filesystem and PBS cluster with both AzureHPC (scripts to automate deployment using the Azure CLI) and Azure CycleCloud, and running the IOR filesystem benchmark. It also covers using the HSM capabilities for archival and backup to Azure Blob Storage, and viewing metrics in Log Analytics.

 

Provisioning with AzureHPC

 

First, download AzureHPC by running the following Git commands:

git clone https://github.com/Azure/azurehpc.git

 

Next, set up the environment for your shell by running:

source azurehpc/install.sh

 

Note: This install.sh file should be "sourced" in each bash session where you want to run the azhpc-* commands (alternatively, add it to your ~/.bashrc).
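
For example, to make the azhpc-* commands available in every new shell, you could append the source line to your ~/.bashrc (a sketch; the path assumes you cloned the repository into your home directory):

```shell
# Add the azurehpc environment to ~/.bashrc so the azhpc-* commands work in
# every new bash session. Adjust the path if you cloned the repo elsewhere.
line='source "$HOME/azurehpc/install.sh"'
grep -qxF "$line" ~/.bashrc || echo "$line" >> ~/.bashrc
```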

 

The AzureHPC project contains a Lustre filesystem and a PBS cluster example. To clone this example, run:

azhpc-init \
    -c $azhpc_dir/examples/lustre_combined \
    -d <new-directory-name>

 

The example has the following variables that must be set in the config file:

 

Variable                   Description
--------                   -----------
resource_group             The resource group for the project
storage_account            The storage account for HSM
storage_key                The storage key for HSM
storage_container          The container to use for HSM
log_analytics_lfs_name     The name to use in Log Analytics
log_analytics_workspace    The Log Analytics workspace ID
log_analytics_key          The Log Analytics key

 

Note: Macros exist to get the storage_key using sakey.<storage-account-name>, the log_analytics_workspace using laworkspace.<resource-group>.<workspace-name>, and the log_analytics_key using lakey.<resource-group>.<workspace-name>.
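
For instance, the variables section of the config file might look like this (all names here are illustrative, not real resources):

```json
"variables": {
    "resource_group": "my-lustre-rg",
    "storage_account": "mylustresa",
    "storage_key": "sakey.mylustresa",
    "storage_container": "lustre-hsm",
    "log_analytics_lfs_name": "lustre_lfs",
    "log_analytics_workspace": "laworkspace.my-lustre-rg.my-workspace",
    "log_analytics_key": "lakey.my-lustre-rg.my-workspace"
}
```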

Other values can be set for the VM SKU or the number of instances to use. This example has a headnode (D16_v3), two compute nodes (D32_v3), and four Lustre nodes (L32_v2). There is also an AzureHPC web tool you can use to view a config file by clicking Open and load locally or by passing a URL, for example, the lustre_combined example.

 


Figure 1. AzureHPC web tool showing lustre_combined example

 

Once the config file is set up, run:

 

azhpc-build

 

The progress is displayed as it runs. For example:

 

paul@nuc:~/Microsoft/azurehpc_projects/lustre_test$ azhpc-build 
You have 2 updates available. Consider updating your CLI installation.
Thu  5 Dec 10:45:13 GMT 2019 : Azure account: AzureCAT-TD HPC (f5a67d06-2d09-4090-91cc-e3298907a021)
Thu  5 Dec 10:45:13 GMT 2019 : creating temp dir - azhpc_install_config
Thu  5 Dec 10:45:13 GMT 2019 : creating ssh keys for hpcadmin
Generating public/private rsa key pair.
Your identification has been saved in hpcadmin_id_rsa.
Your public key has been saved in hpcadmin_id_rsa.pub.
The key fingerprint is:
SHA256:sM+Wb0bByl4EoxrLV6TdkLEADSP/Mj0w94xIopH034M paul@nuc
The key's randomart image is:
+---[RSA 2048]----+
| .. ++. .o       |
|...o ...*.       |
|o ..= o=.*       |
| o ooB=*o =      |
|.  .+E*=So .     |
|    +o.++.o      |
|     . .=o       |
|       ...o      |
|         o.      |
+----[SHA256]-----+
Thu  5 Dec 10:45:13 GMT 2019 : creating resource group
Location    Name
----------  -------------------------
westeurope  paul-azurehpc-lustre-test
Thu  5 Dec 10:45:16 GMT 2019 : creating network

Thu  5 Dec 10:45:23 GMT 2019 : creating subnet compute
AddressPrefix    Name     PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  -------  --------------------------------  -----------------------------------  -------------------  -------------------------
10.2.0.0/22      compute  Enabled                           Enabled                              Succeeded            paul-azurehpc-lustre-test
Thu  5 Dec 10:45:29 GMT 2019 : creating subnet storage
AddressPrefix    Name     PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  -------  --------------------------------  -----------------------------------  -------------------  -------------------------
10.2.4.0/24      storage  Enabled                           Enabled                              Succeeded            paul-azurehpc-lustre-test
Thu  5 Dec 10:45:35 GMT 2019 : creating vmss: compute
Thu  5 Dec 10:45:40 GMT 2019 : creating vm: headnode
Thu  5 Dec 10:45:46 GMT 2019 : creating vmss: lustre
Thu  5 Dec 10:45:52 GMT 2019 : waiting for compute to be created
Thu  5 Dec 10:47:24 GMT 2019 : waiting for headnode to be created
Thu  5 Dec 10:47:26 GMT 2019 : waiting for lustre to be created
Thu  5 Dec 10:48:28 GMT 2019 : getting public ip for headnode
Thu  5 Dec 10:48:29 GMT 2019 : building hostlists
Thu  5 Dec 10:48:33 GMT 2019 : building install scripts
rsync azhpc_install_config to headnode0d5c95.westeurope.cloudapp.azure.com
Thu  5 Dec 10:48:42 GMT 2019 : running the install scripts
Step 0 : install_node_setup.sh (jumpbox_script)
    duration: 20 seconds
Step 1 : disable-selinux.sh (jumpbox_script)
    duration: 1 seconds
Step 2 : nfsserver.sh (jumpbox_script)
    duration: 32 seconds
Step 3 : nfsclient.sh (jumpbox_script)
    duration: 28 seconds
Step 4 : localuser.sh (jumpbox_script)
    duration: 2 seconds
Step 5 : create_raid0.sh (jumpbox_script)
    duration: 21 seconds
Step 6 : lfsrepo.sh (jumpbox_script)
    duration: 1 seconds
Step 7 : lfspkgs.sh (jumpbox_script)
    duration: 221 seconds
Step 8 : lfsmaster.sh (jumpbox_script)
    duration: 25 seconds
Step 9 : lfsoss.sh (jumpbox_script)
    duration: 5 seconds
Step 10 : lfshsm.sh (jumpbox_script)
    duration: 134 seconds
Step 11 : lfsclient.sh (jumpbox_script)
    duration: 117 seconds
Step 12 : lfsimport.sh (jumpbox_script)
    duration: 12 seconds
Step 13 : lfsloganalytics.sh (jumpbox_script)
    duration: 2 seconds
Step 14 : pbsdownload.sh (jumpbox_script)
    duration: 1 seconds
Step 15 : pbsserver.sh (jumpbox_script)
    duration: 61 seconds
Step 16 : pbsclient.sh (jumpbox_script)
    duration: 13 seconds
Step 17 : addmpich.sh (jumpbox_script)
    duration: 4 seconds
Thu  5 Dec 11:00:23 GMT 2019 : cluster ready

 

Once complete, you can connect to the headnode with the following command:

 

azhpc-connect -u hpcuser headnode

 

Provisioning with Azure CycleCloud

 

This section walks you through setting up a Lustre filesystem and an autoscaling PBSPro cluster with the Lustre client installed. This process uses an Azure CycleCloud project, which is available here.

 

Installing the Lustre project and templates

 

These instructions assume you have an Azure CycleCloud application server running and a terminal with both Git and the Azure CycleCloud CLI installed.

 

First, check out the cyclecloud-lfs repository:

 

git clone https://github.com/edwardsp/cyclecloud-lfs

 

This repository contains the Azure CycleCloud project and templates. There is an lfs template for the Lustre filesystem and a pbspro-lfs template, which is a modified version of the official pbspro template (from here). The pbspro-lfs template is included in the GitHub project to test the Lustre filesystem. Instructions for adding the Lustre client to another template are found here.

 

The following commands upload the project and import the templates to Azure CycleCloud:

 

cd cyclecloud-lfs
cyclecloud project upload <container>
cyclecloud import_template -f templates/lfs.txt
cyclecloud import_template -f templates/pbspro-lfs.txt

 

Note: Replace <container> with the Azure CycleCloud "locker" you want to use. You can list your lockers by running cyclecloud locker list.

 

Once these commands are run, you will see the new templates in your Azure CycleCloud web interface, as shown in Figure 2.

 


 

Figure 2. Azure CycleCloud web interface

 

Creating the Lustre Cluster

 

1. Create the lfs cluster and choose a name, as shown in Figure 3:

 


 

Figure 3. Lustre Cluster About

 

Note: This name is used later in the PBS cluster to reference this filesystem.

 

2. Click Next to move to the Required Settings. Here you can choose the region and VM types. Choose only an L_v2 instance type; going beyond L32_v2 is not recommended because the network throughput does not scale linearly beyond this size. All NVMe disks are combined in a RAID 0 for the OST in the virtual machine.

 


Figure 4. Lustre Cluster Required Settings

 

3. Choose the Base OS in Advanced Settings. This determines which version of Lustre to use. The scripts are set up to use the Whamcloud repository for Lustre, so RPMs for Lustre 2.10 are only available up to CentOS 7.6, and Lustre 2.12 is available for CentOS 7.7.

 

Note: Both the server and client Lustre versions need to match.
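
One way to confirm that the versions match is to compare the installed Lustre packages on a server node and a client (a sketch; assumes you can log in to both nodes):

```shell
# List the installed Lustre packages on this node.
rpm -qa 'lustre*'

# Query the version of the running Lustre stack (assumes the Lustre
# kernel modules are loaded).
sudo lctl get_param version
```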

 

 

Figure 5. Lustre Cluster Advanced Settings

 

4. In Lustre Settings, you can choose the Lustre version and the number of Additional OSS nodes. The number of OSS nodes is chosen here and can't be modified later without recreating the filesystem.

 

5. To use HSM, enable the checkbox and provide details for a Storage Account, Storage Key, and Storage Container. All files in the container are imported into Lustre when the filesystem is started.

 

Note: This only populates the metadata and files are downloaded on-demand as they are accessed. Alternatively, they can be restored using the lfs hsm_restore command.

 

6. To use Log Analytics, enable the checkbox and provide details for the Name, Log Analytics Workspace, and Log Analytics Key. The Name is the log name to use for the metrics.

 


 

Figure 6. Lustre Cluster Advanced Settings

 

Click Save and start the cluster.

 

Creating the PBS Cluster

 

To test the Lustre filesystem, create a pbspro-lfs cluster as follows:

  1. Name the cluster, select the region, SKUs, and autoscale settings, and choose a subnet with access to the Lustre cluster.
  2. In the Advanced Settings, make sure you know which version of CentOS you are using. At the time of writing, Cycle CentOS 7 is version 7.6, but you may want to explicitly pin the version with a custom image because the image used by Azure CycleCloud may be updated.
  3. In Lustre Settings, choose from the available Lustre clusters in the dropdown menu.
  4. Make sure the Lustre Version is correct for the OS that is chosen and check that it matches the Lustre cluster.
  5. Choose the path for Lustre to be mounted on all the clients and click Save.

 


 

     Figure 7. PBSPro-lfs Cluster Advanced Settings

 

Once the Lustre Cluster is running, you can start this cluster.

 

Lustre performance

 

We're using ior to test the performance. Either the AzureHPC or CycleCloud version can be used, but the commands change slightly depending on the image and OS version used. The following commands relate to the lustre_combined AzureHPC example.

 

First, connect to the headnode:

 

azhpc-connect -u hpcuser headnode

 

We are compiling ior, which requires the MPI compiler to be installed on the headnode:

 

sudo yum -y install mpich-devel

 

Now, download and compile ior:

 

module load mpi/mpich-3.0-x86_64
wget https://github.com/hpc/ior/releases/download/3.2.1/ior-3.2.1.tar.gz
tar zxvf ior-3.2.1.tar.gz
cd ior-3.2.1
./configure --prefix=$HOME/ior
make
make install

 

Move to the Lustre filesystem:

 

cd /lustre

 

Create a PBS job file. For example, run_ior.pbs:

 

#!/bin/bash

source /etc/profile
module load mpi/mpich-3.0-x86_64

cd $PBS_O_WORKDIR

NP=$(wc -l <$PBS_NODEFILE)
NODES=$(sort -u $PBS_NODEFILE | wc -l)
PPN=$(($NP / $NODES))

TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")

mpirun -np $NP -machinefile $PBS_NODEFILE \
    $HOME/ior/bin/ior -a POSIX -v -z -i 1 -m -d 1 -B -e -F -r -w -t 32m -b 4G \
    -o $PWD/test.$TIMESTAMP \
    | tee ior-${NODES}x${PPN}.$TIMESTAMP.log

 

Submit an ior benchmark as follows:

 

client_nodes=2
procs_per_node=32
qsub -lselect=${client_nodes}:ncpus=${procs_per_node}:mpiprocs=${procs_per_node},place=scatter:excl run_ior.pbs
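
To measure scaling, the submission can be wrapped in a loop, one job per client-node count (a sketch; the node counts are assumptions and must fit within your cluster's autoscale limits):

```shell
# Submit one ior run per client-node count to build a scaling curve.
procs_per_node=32
for client_nodes in 1 2 4 8 16; do
    qsub -lselect=${client_nodes}:ncpus=${procs_per_node}:mpiprocs=${procs_per_node},place=scatter:excl run_ior.pbs
done
```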

 

Figure 8 shows the results of testing the bandwidth of the Lustre filesystem, scaling from 1 to 16 OSS VMs.

 


 

Figure 8. IOR benchmark results

 

In each run, the same number of client VMs were used as OSS VMs, and 32 processes were run on each client VM. Each client VM is a D32_v3, which has an expected bandwidth of 16,000 Mbps (see here), and each OSS VM is an L32_v2, which has an expected bandwidth of 12,800 Mbps (see here). This means that a single client should be able to saturate the bandwidth of one OSS. The max network line is the expected bandwidth of one OSS multiplied by the number of OSS VMs.
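
The expected ceiling can be computed directly from the per-OSS bandwidth; for example:

```shell
# Expected aggregate ("max network") bandwidth: per-OSS bandwidth x OSS count.
oss_bw_mbps=12800            # L32_v2 expected network bandwidth in Mbps
for n in 1 2 4 8 16; do
    # Divide Mbps by 8 to get MB/s.
    echo "$n OSS: $((oss_bw_mbps * n)) Mbps = $((oss_bw_mbps * n / 8)) MB/s"
done
```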

 

Using HSM

 

The AzureHPC examples and the Azure CycleCloud templates set up HSM on Lustre and import the storage container when the filesystem is created. Only the metadata is read, so files are downloaded on demand as they are accessed. Apart from these on-demand downloads, however, archival operations are not automatic; you run them yourself with the lfs commands.

 

The copytool for Azure is available here. This copytool supports users, groups, and UNIX file permissions, which are added as metadata to the files stored in Azure Blob storage.

 

HSM commands

 

The HSM actions are available with the lfs command. All the commands that follow work with multiple files as arguments.
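
For example, to archive every file under a directory in one pass (the path here is an assumption; batching with xargs keeps the argument lists manageable):

```shell
# Archive all regular files under /lustre/results, 50 files per lfs call.
find /lustre/results -type f -print0 | xargs -r0 -L 50 sudo lfs hsm_archive
```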

 

Archive

 

The lfs hsm_archive command copies the file to Azure Blob storage. Example usage:

 

$ sudo lfs hsm_archive myfile

 

Release

 

The lfs hsm_release command releases an archived file from the Lustre filesystem. It no longer takes up space in Lustre, but it still appears in the filesystem. When opened, it's downloaded again. Example usage:

 

$ sudo lfs hsm_release myfile
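
A common pattern is to archive a file and release it only once the copy to Blob storage has completed. A sketch, using lfs hsm_action (described below in the original command set) to poll for completion; the polling interval is arbitrary:

```shell
sudo lfs hsm_archive myfile

# hsm_action reports NOOP once no HSM request is ongoing or pending.
until sudo lfs hsm_action myfile | grep -q NOOP; do
    sleep 10
done

sudo lfs hsm_release myfile
```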

 

Remove

 

The lfs hsm_remove command deletes an archived file from the archive.

 

$ sudo lfs hsm_remove myfile

 

State

 

The lfs hsm_state command shows the state of the file in the filesystem. This is output for a file that isn't archived:

 

$ sudo lfs hsm_state myfile 
myfile: (0x00000000)

 

This is output for a file that is archived:

 

$ sudo lfs hsm_state myfile 
myfile: (0x0000000d) exists archived, archive_id:1

 

This is output for a file that is archived and released (that is, in storage but not taking up space in the filesystem):

 

$ sudo lfs hsm_state myfile 
myfile: (0x0000000d) released exists archived, archive_id:1

 

Action

 

The lfs hsm_action command displays the current HSM request for a given file. This is most useful when checking the progress on files being archived or restored. When there is no ongoing or pending HSM request, it displays NOOP for the file.

 

Rehydrating the whole filesystem from blob storage

 

In certain cases, you may want to restore all the released (or imported) files into the filesystem. This is best used in cases where all the files are required and you don't want the application to wait for each file to be retrieved separately. This can be started with the following command:

 

cd <lustre_root>
find . -type f -print0 | xargs -r0 -L 50 sudo lfs hsm_restore

 

The progress of the files can be checked with sudo lfs hsm_action. To find out how many files are left to be restored, use the following command:

 

cd <lustre_root>
find . -type f -print0 \
    | xargs -r0 -L 50 sudo lfs hsm_action \
    | grep -v NOOP \
    | wc -l
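
Alternatively, you can count how many files are still in the released state (a sketch; hsm_state prints "released" for files that have not yet been restored):

```shell
cd <lustre_root>
find . -type f -print0 \
    | xargs -r0 -L 50 sudo lfs hsm_state \
    | grep -c released
```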

 

Viewing Lustre metrics in Log Analytics

 

Each Lustre VM logs the following metrics every sixty seconds if Log Analytics is enabled:

  • Load average
  • Kilobytes free
  • Network bytes sent
  • Network bytes received
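
These values all come from standard Linux sources; a sketch of how each one could be gathered on a node (the mount point and network interface names are assumptions):

```shell
# Gather the same metrics that each Lustre VM logs.
mountpoint=/                 # replace with the Lustre mount, e.g. /lustre
iface=eth0                   # replace with the VM's network interface

# 1-minute load average is the first field of /proc/loadavg.
loadavg=$(cut -d' ' -f1 /proc/loadavg)

# Kilobytes free: the "Available" column of df -k for the mount.
kbfree=$(df -k "$mountpoint" | awk 'NR==2 {print $4}')

# Bytes received/sent: columns 1 and 9 of the interface line in /proc/net/dev.
read -r bytesrecv bytessend <<< "$(awk -v i="$iface" -F'[: ]+' '$2 == i {print $3, $11}' /proc/net/dev)"

echo "loadavg=$loadavg kbfree=$kbfree bytessend=$bytessend bytesrecv=$bytesrecv"
```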

 

You can view this in the portal by selecting Monitor and then Logs. Here is an example query:

 

<log-name>_CL
| summarize max(loadavg_d),max(bytessend_d),max(bytesrecv_d) by bin(TimeGenerated,1m), hostname_s
| render timechart

 

Note: Replace <log-name> with the name you chose.

 

 

Figure 9. Log Analytics data

 

Summary

 

This post outlined setting up a Lustre filesystem and PBS cluster using AzureHPC and Azure CycleCloud, using the HSM capabilities for archival and backup to Azure Blob Storage, and viewing metrics in Log Analytics. This is a cost-effective way to provision a high-performance filesystem on Azure.

 

Learn more

  • AzureCAT eBook: Parallel Virtual File Systems on Microsoft Azure
  • Parallel Virtual File Systems on Microsoft Azure - Part 2: Lustre on Azure