Health checks for HPC workloads on Microsoft Azure

Introduction

 

Many HPC applications are highly parallel and tightly coupled, meaning that during an application's parallel simulation run, all parallel processes must communicate with each other frequently. These applications usually perform best when the inter-process communication runs over a high-bandwidth, low-latency network such as InfiniBand. Their tightly coupled nature means that a single VM that is not functioning optimally can impair the performance of the whole job. The purpose of these checks/tests is to help you quickly identify a non-optimal node so it can be excluded from a parallel job. If your job needs an exact number of parallel processes, slightly overprovisioning is good practice, in case you find a few nodes that you need to exclude.

HB and HC SKUs were specifically designed for HPC applications. They have InfiniBand (EDR) networks, high floating-point performance, and high memory bandwidth. The tests/checks described here are designed specifically for HB and HC SKUs. It is good practice to run them prior to running a parallel job (especially a large one).

 

How to access the test/check scripts 

git clone git@github.com:Azure/azurehpc.git

Note: Scripts will be in the apps/health_checks directory.

Tests/Checks

 

Check the InfiniBand network

 

This test identifies unexpected issues with the InfiniBand network. It runs a network bandwidth test on pairs of compute nodes (one process running on each compute node). A hostfile lists all the nodes to be tested, and the node pairs are grouped in a ring. For example, if the hostfile contained 4 hosts (A, B, C, and D), the 4 node pairs tested would be (A,B), (B,C), (C,D), and (D,A).

 
A bad node can be identified by a node pair test that fails, does not run, or underperforms (measured network bandwidth << the expected network bandwidth).
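The ring pairing can be sketched as follows (a minimal illustration with four sample hostnames; the repository's `run_ring_osu_bw.sh` builds these pairs itself from your hostlist):

```shell
#!/bin/bash
# Illustration of the ring pairing: hosts A,B,C,D -> (A,B) (B,C) (C,D) (D,A).
# Replace the sample array with the lines of your hostlist file.
hosts=(A B C D)
n=${#hosts[@]}
pairs=""
for ((i=0; i<n; i++)); do
    j=$(( (i+1) % n ))          # wrap around so the last host pairs with the first
    pairs+="${hosts[i]},${hosts[j]} "
done
echo "$pairs"                   # prints: A,B B,C C,D D,A
```

Each pair then gets its own point-to-point bandwidth run, so every node appears in exactly two tests.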

 

Procedure:

  1. Download the OSU micro-benchmark suite: 
    1. http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.6.1.tar.gz
  2. Build/install the OSU micro-benchmark suite.
    1. module load mpi/mvapich2-2.3.1
    2. ./configure --prefix=/location/you/want/to/install CC=/opt/mvapich2-2.3.1/bin/mpicc CXX=/opt/mvapich2-2.3.1/bin/mpicxx
    3. make
    4. make install
  3. run_ring_osu_bw.sh [/full/path/to/hostlist] [/full/path/to/osu_bw] [/full/path/to/OUTPUT_DIR]
    1. The first script parameter is the full path to the hostlist, which should have a single hostname or IP address per line.
      Host1 

      Host2

      Host3
    2. The second script parameter is the full path to the osu_bw executable that you built in step 2.
    3. The third script parameter is the full path to the output directory. This is the location of the resulting output from this test.
    4. These pairwise pt-to-pt benchmarks run serially (each test <20s), so the total test time would depend on how many nodes are in the hostlist file.
  4. A number of files will be created for each node-pair tested. An output report will also be generated in the OUTPUT_DIR directory called “osu_bw_report.log_PID”. The second column is IB bandwidth numbers in MB/s (ascending order). Any numbers << 7000 should be reported and removed from your hostlist. (The slowest test results will be at the top of this file.) If any of the node pair tests failed (the file size is zero, or it contains an error), report those nodes and remove them from your hostlist before running your parallel job.
    10.32.4.211_to_10.32.4.213_osu_bw.log_68076:4194304 7384.99 

    10.32.4.248_to_10.32.4.249_osu_bw.log_68076:4194304 7390.99

    10.32.4.142_to_10.32.4.143_osu_bw.log_68076:4194304 7394.00

    10.32.4.174_to_10.32.4.175_osu_bw.log_68076:4194304 7400.52

    10.32.4.194_to_10.32.4.195_osu_bw.log_68076:4194304 7407.01
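Slow or failed pairs in that report can be flagged with a short filter like the one below. This is a sketch: the report file name (the PID suffix varies), the 6000 MB/s cutoff, and the second (slow) sample pair are all illustrative, and the script writes two sample report lines so it is self-contained.

```shell
#!/bin/bash
# Flag node pairs whose measured bandwidth (2nd column, MB/s) is well below
# the ~7000 MB/s expected here. Report name and cutoff are illustrative.
threshold=6000                       # MB/s cutoff; tune for your SKU
report="osu_bw_report.log_12345"     # real report has a PID suffix
# Two sample report lines (one healthy pair, one hypothetical slow pair):
printf '%s\n' \
  '10.32.4.211_to_10.32.4.213_osu_bw.log_68076:4194304 7384.99' \
  '10.32.4.100_to_10.32.4.101_osu_bw.log_68076:4194304 1523.40' > "$report"
# Print the first field (the pair's log file name) for every slow pair:
slow_pairs=$(awk -v t="$threshold" '$2+0 < t+0 {print $1}' "$report")
echo "$slow_pairs"
```

Any pair this prints (or any pair whose log file is empty or contains an error) identifies nodes to report and drop from the hostlist.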

  

Check all the compute nodes memory

 

This test helps identify problematic memory DIMMs (for example, DIMMs that are failing or underperforming). It uses the STREAM benchmark, which measures the memory bandwidth on each compute node; STREAM is run on all compute nodes in parallel. Bad memory on a compute node is identified by the STREAM benchmark failing or by measured memory bandwidth << expected memory bandwidth.

 

Procedure:

  1. Get the stream code from www.cs.virginia.edu/stream/
  2. Build stream with the Intel compiler:
    1. icc -o stream.intel stream.c -DSTATIC -DSTREAM_ARRAY_SIZE=3200000000 -mcmodel=large -shared-intel -Ofast -qopenmp 
  3. WCOLL=hostlist pdsh /path/to/run_stream_bw.sh [/full/path/to/intel/compilervars.sh] [/full/path/to/stream] [/full/path/to/OUTPUT_DIR] 
    1. The first script parameter “/full/path/to/intel/compilervars.sh” is the location of the Intel “compilervars.sh” script, which will be sourced to set up the correct Intel compiler environment.
    2. The second parameter “/full/path/to/stream” is the full path to the stream executable, which was built in step 2.
    3. The third parameter “/full/path/to/OUTPUT_DIR” is the full path to the directory location where the resulting output from running this test will be deposited.
  4. A test summary report can be generated by running this script: 
    1. report_stream.sh [/full/path/to/OUTPUT_DIR] 
  5. The stream test report “stream_report.log_PID” lists the stream benchmark result for each node in ascending order (the slowest results will be at the top of the file). The second column gives the node memory bandwidth in MB/s. For HB, any memory bandwidth << ~220 GB/s (and for HC, << 180 GB/s) should be reported and the node removed from your hostlist. Any node on which this test fails should also be reported and removed from your hostlist.

    cgbbv300c00009L/stream.out_27138:Triad: 231227.8 0.084653 0.083035 0.086457 

    cgbbv300c00009B/stream.out_27363:Triad: 233946.3 0.084680 0.082070 0.095031

    cgbbv300c0000BR/stream.out_28519:Triad: 234140.8 0.083516 0.082002 0.084803

    cgbbv300c00009O/stream.out_26951:Triad: 234578.7 0.082362 0.081849 0.083965

    cgbbv300c00009U/stream.out_27276:Triad: 234736.0 0.083303 0.081794 0.086764
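As with the InfiniBand report, underperforming nodes can be pulled out of this report with a short filter. The sketch below assumes the report line format shown above (node/file:Triad: bandwidth ...); the report name, the 200000 MB/s cutoff, and the second (slow) sample node are illustrative, and two sample lines are written so the script is self-contained.

```shell
#!/bin/bash
# Flag nodes whose Triad bandwidth (2nd field, MB/s) is well below the
# ~220 GB/s expected on HB. Report name, cutoff, and slow node are examples.
threshold=200000                     # MB/s; "well below" 220 GB/s for HB
report="stream_report.log_12345"     # real report has a PID suffix
printf '%s\n' \
  'cgbbv300c00009L/stream.out_27138:Triad: 231227.8 0.084653 0.083035 0.086457' \
  'cgbbv300c0000XX/stream.out_27000:Triad: 151000.2 0.120000 0.118000 0.125000' > "$report"
# The node name is the part of field 1 before the "/":
slow_nodes=$(awk -v t="$threshold" '$2+0 < t+0 {split($1, a, "/"); print a[1]}' "$report")
echo "$slow_nodes"
```

Nodes this prints should be reported and removed from the hostlist before the parallel job runs. For HC, drop the cutoff in line with the 180 GB/s expectation.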

 

Summary

 

A single parallel HPC workload may require many compute nodes for the job to complete in a reasonable time. If one of the compute nodes is configured incorrectly or has sub-par performance, it can degrade the overall performance of the parallel job. The checks/tests described here will help identify such problems; to ensure your nodes are configured correctly, we strongly recommend running them before any large parallel job.