
BeeGFS and BeeGFS On Demand (BeeOND) are popular parallel file systems used to handle the I/O requirements for many high performance computing (HPC) workloads. Both run well on Azure, and both can harness the performance of multiple solid state drives (SSDs) to provide high aggregate I/O performance. However, the default configuration settings are not optimal for all I/O patterns. This post provides some tuning suggestions to improve BeeGFS and BeeOND performance for a number of specific I/O patterns.

 

This post explores the following tuning options and discusses examples of how to tune them for different I/O patterns:

  • Number of managed disks or local NVMe SSDs per VM
  • BeeGFS chunk size and number of targets
  • Number of metadata servers

 

Deploying BeeGFS and BeeOND

The azurehpc github repository contains scripts to automatically deploy BeeGFS and BeeOND parallel file systems on Azure. (See the azurehpc/examples directory.)

 

git clone git@github.com:Azure/azurehpc.git

How and where you deploy BeeGFS in relation to the compute nodes affects latency, so make sure they are in close proximity to one another. Follow these pointers:

  • If you are using a Virtual Machine Scale Set to deploy BeeGFS or a compute cluster, make sure that singleplacementgroup is set to true.
  • If you are using managed disks for BeeGFS, set the vm tag to Performance Optimization, which ensures that the managed disks are close to the VMs.
  • Include the BeeGFS cluster and the compute nodes in the same proximity placement group. This guarantees that all resources in the group are in the same datacenter.
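For reference, the placement settings above map to a couple of properties in the Virtual Machine Scale Set definition. This is a minimal sketch following the ARM template schema; the proximity placement group name (`hpc-ppg`) is a hypothetical example:

```json
{
  "properties": {
    "singlePlacementGroup": true,
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', 'hpc-ppg')]"
    }
  }
}
```

Both the BeeGFS scale set and the compute scale set would reference the same proximity placement group resource.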

 

Disk layout considerations

When deploying a BeeGFS or BeeOND parallel file system on Azure using managed disks for storage, the number and type of disks need to be carefully considered. The VM disks throughput and IOPS limits dictate the maximum amount of I/O that each VM in the parallel file system can support. Azure managed disks have built-in redundancy. At a minimum, locally redundant storage (LRS) keeps three copies of data, and so RAID 0 striping is enough.

 

In general, for large throughput, striping a larger number of slower disks is preferable to striping fewer faster disks. As Figure 1 shows, eight P20 disks give better performance than four P30 disks, while eight P30 disks do not improve performance because of the VM's (E32s_v3) disk throughput throttling limit.
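The throttling effect can be sketched as a back-of-envelope model: aggregate throughput is the sum of the per-disk limits, capped by the VM's own disk throughput limit. The numbers below are illustrative, not official Azure limits:

```shell
# Model: aggregate disk throughput = min(per-disk limit * disk count, VM cap).
# All figures in MB/s and purely illustrative.
aggregate_mbps() {
  local per_disk_mbps=$1 ndisks=$2 vm_cap_mbps=$3
  local total=$(( per_disk_mbps * ndisks ))
  if [ "$total" -gt "$vm_cap_mbps" ]; then total=$vm_cap_mbps; fi
  echo "$total"
}

aggregate_mbps 150 4 768   # below the VM cap: adding disks still helps
aggregate_mbps 200 8 768   # sum exceeds the cap: extra disks are wasted
```

Once the sum of the disk limits passes the VM cap, adding more (or faster) disks buys nothing, which is why the 8xP30 configuration in Figure 1 shows no improvement.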

 

Fig1-StripingDisks.png

Fig. 1. Striping a larger number of slower disks (e.g. 8xP20) gives better throughput than striping fewer faster disks (e.g. 3xP30). Striping more P30 disks (e.g. 8xP30) does not improve the performance because of disk throughput throttling limits.

 

In the case of local NVMe SSDs (such as Lsv2), BeeGFS storage targets that have a single NVMe disk give the best aggregate throughput and IOPS. For very large BeeGFS parallel file systems, striping more NVMe disks per storage target can simplify the management and deployment.

 

Note that the performance of a BeeGFS parallel file system that uses ephemeral NVMe disks (such as Lsv2) is limited by the VM network throttling limits. In other words, I/O performance is network-bound.

 

Fig2-PerformanceOfReads.png

Fig. 2. Performance of reads and writes varies by number of NVMes per VM. The best aggregate throughput for BeeGFS was achieved using L8s_v2.

 

Fig3-SingleNVMe.png 

Fig. 3. A single NVMe per VM may give better aggregate IOPS performance.

 

Chunk size and number of targets

BeeGFS and BeeOND parallel file systems consist of several storage targets. Typically, each VM is a storage target. It can use managed disks or local NVMe SSDs, and it can be striped with RAID 0. The chunk size is the amount of data sent to each target, to be processed in parallel. The default chunk size is 512 K, and the default number of storage targets is four, as the following output shows:

 

beegfs-ctl --getentryinfo /beegfs
EntryID: root
Metadata buddy group: 1
Current primary metadata node: cgbeegfsserver-4 [ID: 1]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
+ Storage Pool: 1 (Default)

Fortunately, you can change the chunk size and number of storage targets for a directory. This means that you can tune each directory for a different I/O pattern by adjusting the chunk size and number of targets.


For example, to set the chunk size to 1 MB and number of storage targets to 8:

 

beegfs-ctl --setpattern --chunksize=1m --numtargets=8 /beegfs/chunksize_1m_4t

A rough formula for setting the optimal chunk size is:

 

Chunksize = I/O Blocksize / Number of targets
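The formula above can be wrapped in a small helper. This is a hypothetical convenience function, not part of BeeGFS; it clamps the result to BeeGFS's 64 K minimum chunk size:

```shell
# Rule of thumb: chunk size (KB) = I/O block size (KB) / number of targets,
# clamped to the 64K minimum chunk size that BeeGFS supports.
chunksize_kb() {
  local blocksize_kb=$1 numtargets=$2
  local chunk=$(( blocksize_kb / numtargets ))
  if [ "$chunk" -lt 64 ]; then chunk=64; fi
  echo "$chunk"
}

chunksize_kb 8192 8   # 8 MB blocks over 8 targets -> 1024 KB (--chunksize=1m)
chunksize_kb 128 4    # small blocks -> clamped to the 64K minimum
```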

 

Number of metadata servers

If you set up BeeGFS or BeeOND parallel file systems that do a lot of metadata operations (such as creating files, deleting files, stat operations on files, and checking file attributes and permissions), you must choose the right number and type of metadata servers to get the best performance. By default, BeeOND configures only one metadata server no matter how many nodes are in your BeeOND parallel file system. Fortunately, you can change the default number of metadata servers used in BeeOND.


To increase the number of metadata servers in a BeeOND parallel file system to 4:

 

beeond start -P -m 4 -n hostfile -d /mnt/resource/BeeOND -c /mnt/BeeOND
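To confirm that the additional metadata servers came up, you can list the metadata nodes of the mounted instance. The mount point follows the example above; this assumes the standard beegfs-ctl node-listing options:

```
beegfs-ctl --mount=/mnt/BeeOND --listnodes --nodetype=meta
```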

 

Large Shared Parallel I/O

The shared parallel I/O format is common in HPC: all parallel processes write to and read from the same file (N-1 I/O). Examples of this format are MPI-IO, HDF5, and NetCDF. The default settings in BeeGFS and BeeOND are not optimal for this I/O pattern, and you can improve performance significantly by increasing the number of storage targets.


It is also important to be aware that the number of targets used for shared parallel I/O limits the maximum size of the file.
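For example, to dedicate a directory to shared-file output, you might raise its target count well above the default of four. The directory name and target count here are illustrative; the count cannot exceed the number of storage targets in the file system:

```
beegfs-ctl --setpattern --chunksize=1m --numtargets=16 /beegfs/shared_io
```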

 

Fig4-TuningBeeOND.png

Fig. 4. By tuning BeeOND (increasing the number of storage targets), we get significant improvements in I/O performance for HDF5 (shared parallel I/O format).

 

Small I/O (high IOPS)

BeeGFS and BeeOND are primarily designed for large throughput I/O, but there are times when the I/O pattern of interest is high IOPS. The default configuration is not ideal for this I/O pattern, but some tuning modifications can improve the performance. To improve IOPS, it is usually best to reduce the chunk size to the minimum of 64 K (assuming the I/O block size is less than 64 K) and reduce the number of targets to one.
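Applied to a directory intended for small-I/O, high-IOPS work (the directory name is illustrative):

```
beegfs-ctl --setpattern --chunksize=64k --numtargets=1 /beegfs/high_iops
```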

 

beegfs_tuning_fig3.png

Fig. 5. BeeGFS and BeeOND IOPS performance is improved by reducing the chunk size to 64 K and setting the number of storage targets to one.

 

Metadata performance

The number and type of metadata servers impact the performance for metadata-sensitive I/O. For example, BeeOND has only one metadata server by default, so adding more metadata servers can significantly improve performance.

 

Fig6-BeeONDmetadata.png

Fig. 6. BeeOND metadata performance can be significantly improved by increasing the number of metadata servers.

 

Summary

With HPC on Azure, don’t just accept the configuration defaults in BeeGFS or BeeOND. Performance tuning makes a big difference in I/O throughput. For more information about tuning, see Tips and Recommendations in the BeeGFS documentation.