This is an exciting week with the International Supercomputing Conference taking place, even though I am not attending. To get in the spirit, and after seeing this article about how every system on the latest list now exceeds 1 petaflop, I wondered whether I could build a cluster on Azure worthy of the "Petaflop Club". As it happened, I had a few nodes lying around after doing some large-scale CP2K runs.
Azure has recently launched two new VM types suitable for HPC. For this experiment the HC node was used, which comprises dual-socket Intel Xeon Platinum 8168 processors connected with 100 Gb/s EDR InfiniBand from Mellanox. To meet the petaflop challenge, a cluster of 512 nodes was used. By my calculations this should give a peak of 1.369 PFlop/s:
Rpeak (GFlop/s) = <frequency (GHz)> * <cores-per-node> * <nodes> * <flops-per-cycle-per-core>
                = 1.9 * 44 * 512 * 32
                = 1,369,702.4 GFlop/s ≈ 1.37 PFlop/s
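As a quick sanity check, here is the same arithmetic as a small Python sketch; the 1.9 GHz figure is presumably the all-core AVX-512 frequency of the 8168, and the 32 flops per cycle comes from its two AVX-512 FMA units (2 x 8 doubles x 2 ops):

# Theoretical peak for the 512-node cluster, double precision.
freq_ghz = 1.9          # assumed all-core AVX-512 frequency
cores_per_node = 44     # cores exposed per HC node
nodes = 512
flops_per_cycle = 32    # 2 AVX-512 FMA units x 8 doubles x 2 flops

rpeak_gflops = freq_ghz * cores_per_node * nodes * flops_per_cycle
print(f"Rpeak = {rpeak_gflops:,.1f} GFlop/s ({rpeak_gflops / 1e6:.3f} PFlop/s)")
# Rpeak = 1,369,702.4 GFlop/s (1.370 PFlop/s)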
Notes:
The entire cluster uses the CentOS 7.6 HPC Azure Marketplace image, with the only addition being the "intel-mkl-2019" package, which is where Linpack was taken from. Linpack was run with the Intel MPI 2018 that is included in the image. This was only ever going to be a quick test, so I chose to use 32 GB of RAM per node, giving a problem size of 1,482,240, and the benchmark was run with two ranks per node.
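For reference, a common rule of thumb for picking the HPL problem size is to take the square root of the number of doubles that fit in the memory set aside for the matrix and round down to a multiple of the block size NB. A minimal sketch of that arithmetic (not necessarily the exact method used for this run) lands close to the N above:

import math

# Rule-of-thumb HPL problem size from the memory budget: HPL factors an
# N x N matrix of doubles, so N is roughly sqrt(bytes / 8), rounded down
# to a multiple of the block size NB.
nodes = 512
mem_per_node_bytes = 32 * 2**30   # the 32 GB per node mentioned above
nb = 384                          # block size NB from the HPL output below

doubles = nodes * mem_per_node_bytes // 8
n = math.isqrt(doubles) // nb * nb
print(n)   # 1482624 -- in the same ballpark as the N = 1,482,240 actually used

Here are the results: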
================================================================================
T/V                N    NB     P     Q               Time                 Gflops
--------------------------------------------------------------------------------
WC00C2R2     1482240   384    32    32            1816.51            1.19516e+06
HPL_pdgesv() start time Wed Jun 19 13:32:59 2019
HPL_pdgesv() end time   Wed Jun 19 14:03:16 2019
--------------------------------------------------------------------------------
||Ax-b||_oo/(eps*(||A||_oo*||x||_oo+||b||_oo)*N)= 0.0010026 ...... PASSED
================================================================================
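Dividing the measured score by the theoretical peak gives an HPL efficiency of roughly 87%; a quick check:

# Measured HPL result versus the theoretical peak calculated earlier.
rmax_gflops = 1.19516e6        # Gflops column from the HPL output
rpeak_gflops = 1369702.4       # Rpeak from the calculation above
print(f"Efficiency: {rmax_gflops / rpeak_gflops:.1%}")   # Efficiency: 87.3%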
This shows that Azure is definitely a strong contender for the "Petaflop Club" 🙂 In fact, the score of 1.195 PFlop/s would rank in 368th place on the latest list.