Using BeeGFS storage pools to provide flexible performance and data persistence

Introduction

 

On Azure, there are performance and cost advantages to deploying a BeeGFS parallel filesystem that uses NVMe SSDs for storage and metadata (e.g. the L8s_v2 SKU). NVMe SSDs provide superior throughput and IOPS, and because they come with the SKU (e.g. the Lsv2 series), they can significantly reduce cost and complexity: there are no additional data disks to configure, install, and pay for.

The primary disadvantage of using NVMe SSDs in your BeeGFS deployment is that these disks are not persistent: you will lose your data once you stop/deallocate your BeeGFS filesystem. One approach to overcoming this limitation is the storage pools feature of BeeGFS, which allows mixed disk types within a single BeeGFS filesystem. The idea is to add cheap HDD managed disks to an NVMe SSD-based BeeGFS parallel filesystem to provide data persistence. How to deploy BeeGFS with NVMe and HDD using storage pools is discussed below.

 

BeeGFS Storage pools architecture (NVMe SSD and HDD)

 

beegfs_pools.png

Fig. 1 BeeGFS storage pools architecture. BeeGFS storage and metadata share an NVMe SSD on each VM, and each VM has an HDD data disk attached. There are two storage pools: the NVMe SSD storage pool for performance/I/O processing and the HDD storage pool for data persistence/back-up. The BeeGFS manager is used only for BeeGFS administration and can be deployed on a smaller VM.


Deployment

 

The AzureCAT HPC azurehpc repository will be used to deploy the BeeGFS storage pools architecture.

 

git clone git@github.com:Azure/azurehpc.git

We will closely follow the azurehpc beegfs_pools example (in azurehpc/examples/beegfs_pools).

 

 

diagram.png

Fig. 3 Details of the BeeGFS storage pools configuration, from the config.json file in the azurehpc beegfs_pools example.

 

  1. Initialize a new azurehpc project using the azhpc-init command (this creates a working config.json file).
    1. $ azhpc-init -c $azhpc_dir/examples/beegfs_pools -d beegfs_pools -s

      Fri Jun 28 08:50:25 UTC 2019 : variables to set: "-v location=,resource_group="
  2. Populate the config.json template with your desired values.
    1. azhpc-init -c $azhpc_dir/examples/beegfs_pools -d beegfs_pools -v location=westus2,resource_group=azhpc-cluster
  3. Create the BeeGFS storage pool parallel filesystem (NVMe SSD and HDD)
    1. azhpc-build
  4. Connect to the BeeGFS master/manager node (beegfsm)
    1. azhpc-connect -u hpcuser beegfsm
  5. Check that the BeeGFS storage pools are set up and available for use.
    1. $ beegfs-ctl --liststoragepools

      Pool ID   Pool Description   Targets   Buddy Groups
      =======   ================   =======   ============
            1   Default            2,4
            2   hdd_pool           1,3
    2. The HDD disks can be accessed at the /beegfs/hdd_pools mount point (storage pool 2) and the NVMe SSD disks at /beegfs (storage pool 1).
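The mount-point split works because, in BeeGFS, the storage pool for new files follows the stripe pattern of the parent directory, which can be pointed at a pool with beegfs-ctl --setpattern. A minimal sketch (the helper function and the scratch directory name are hypothetical; the pool IDs are the ones listed by beegfs-ctl --liststoragepools):

```shell
# Hypothetical helper: build the beegfs-ctl command that assigns a
# directory's stripe pattern to a given storage pool, so new files
# created in that directory land on that pool.
set_pool_cmd() {
  local pool_id=$1 dir=$2
  echo "beegfs-ctl --setpattern --storagepoolid=${pool_id} ${dir}"
}

# For example, to direct a scratch directory at the HDD pool (ID 2):
set_pool_cmd 2 /beegfs/hdd_pools/scratch
```

Echoing the command first makes it easy to review; run the emitted line on a node where beegfs-ctl is installed.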

 

Data Migration

 

 

 

beegfs_pools_data_flow.png

Fig. 2 BeeGFS storage pools data workflow. If the BeeGFS data is already available on HDD, then migrate it to the NVMe SSDs before doing any I/O processing.

 

After I/O processing is complete, the data can be migrated to permanent storage (HDD) using the following procedure.

 

  1. Copy data to HDD permanent storage.
    1. cp -R /beegfs/data /beegfs/hdd_pools
      or
    2. beegfs-ctl --migrate --storagepoolid=1 --destinationpoolid=2 /beegfs/data
  2. Save/copy the BeeGFS metadata to the BeeGFS storage/metadata VMs' persistent home directories
    1. WCOLL=beegfssm pdsh "cd /mnt/beegfs; sudo tar czvf /home/${user}/beegfs_meta.tar.gz meta/ --xattrs"
  3. The BeeGFS parallel filesystem can now be stopped.
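The checkpoint steps above can be sketched as one dry-run helper. This is a hypothetical wrapper: it only echoes the commands it would run, so the sequence can be reviewed before use (beegfssm is assumed to be the pdsh host list for the storage/metadata VMs):

```shell
# Hypothetical checkpoint helper: emit the commands that migrate file
# data to the HDD pool and archive the BeeGFS metadata before shutdown.
checkpoint_beegfs() {
  local user=$1
  # Step 1: move file contents from the NVMe pool (ID 1) to the HDD pool (ID 2).
  echo "beegfs-ctl --migrate --storagepoolid=1 --destinationpoolid=2 /beegfs/data"
  # Step 2: archive the metadata targets (keeping extended attributes) into
  # a home directory that survives deallocation.
  echo "WCOLL=beegfssm pdsh \"cd /mnt/beegfs; sudo tar czvf /home/${user}/beegfs_meta.tar.gz meta/ --xattrs\""
}

checkpoint_beegfs hpcuser
```

Once the emitted commands look right, they can be run in order on the manager node; only then is it safe to stop/deallocate the cluster.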


After the BeeGFS parallel filesystem is restarted, the BeeGFS data can be migrated back to the NVMe SSDs with the following procedure.

 

  1. Restart the BeeGFS parallel filesystem
  2. Restore the BeeGFS metadata.
    1. WCOLL=beegfssm pdsh "cd /mnt/beegfs; sudo tar xvf /home/${user}/beegfs_meta.tar.gz --xattrs"
  3. Restore the BeeGFS storage data (to the NVMe SSDs)
    1. cp -R /beegfs/hdd_pools/data /beegfs/data
      or
    2. beegfs-ctl --migrate --storagepoolid=2 --destinationpoolid=1 /beegfs/data
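After migrating back, the pool an entry resides on can be confirmed with beegfs-ctl --getentryinfo, whose output includes the entry's stripe settings and storage pool. A small sketch (the helper function is hypothetical; the flag is standard beegfs-ctl):

```shell
# Hypothetical helper: build the command that reports an entry's stripe
# settings, including the storage pool it belongs to.
entry_info_cmd() {
  echo "beegfs-ctl --getentryinfo $1"
}

# After the restore, check that /beegfs/data is back on the NVMe pool (ID 1):
entry_info_cmd /beegfs/data
```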

 

Testing BeeGFS

 

The azurehpc repository contains a number of scripts to measure storage throughput and IOPS using the IOR and FIO benchmarks; see azurehpc/apps/ior and azurehpc/apps/fio.
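For a quick comparison of the two pools, a fio job file along these lines can be pointed at each mount point in turn. This is a sketch, not one of the repository's scripts: the sizes and job counts should be tuned to the VM, and the directory swapped between the NVMe and HDD pool paths.

```ini
; beegfs_write.fio -- hypothetical smoke test for one storage pool
[global]
directory=/beegfs/data   ; change to /beegfs/hdd_pools/data for the HDD pool
rw=write
bs=1M
size=4G
numjobs=4
direct=1
group_reporting

[seq_write]
```

Run it with `fio beegfs_write.fio` on a BeeGFS client and compare the reported bandwidth and IOPS between the two pools.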

 


Conclusion

 

BeeGFS has a built-in feature called storage pools that allows a parallel filesystem to be deployed with different types of disks, creating pools of storage resources with different performance characteristics. This feature can be used to provide an ephemeral NVMe SSD-based BeeGFS parallel filesystem whose data is persisted by adding low-cost HDD data disks.
The deployment of BeeGFS storage pools has been automated in the AzureCAT HPC azurehpc repository. The checkpoint/restart of a BeeGFS parallel filesystem has been discussed, along with how the data can be migrated to/from the NVMe SSDs and HDDs.
This solution provides the best of both worlds: the fast performance of NVMe SSDs with the low cost of persistent HDDs.

 

References

  1. AzureCat HPC azurehpc repository
  2. BeeGFS storage pools