Microsoft and SchedMD partner to bring Slurm into Azure CycleCloud

Microsoft Azure is committed to providing a world-class HPC platform for our customers. Over the last year, we have demonstrated this commitment with the rollout of new HPC hardware offerings and storage options that rival those in any supercomputing center. Azure CycleCloud is designed to help our HPC customers orchestrate these HPC VMs and build cloud clusters that mirror the on-premises systems they are familiar with, while providing the elasticity to right-size clusters based on their workloads.

 

One of the benefits CycleCloud brings to users is that they get to keep working with the scheduling environment they've been using for years, sometimes decades. One scheduler we have seen increasing demand for over the last year is Slurm, an open-source workload manager maintained and developed by SchedMD that is capable of scaling to meet the demands of even the largest HPC workloads.

 

We have partnered with SchedMD to deliver the best user experience for Azure HPC customers. By using Slurm's elastic compute capability and topology awareness, CycleCloud can orchestrate VMs for a Slurm cluster so that jobs are scheduled on the appropriate VMs according to their resource requirements. For example, tightly coupled MPI tasks land on partitions with nodes on the same InfiniBand fabric, while non-MPI tasks can use a separate partition designed for scale across multiple VM families. This is particularly helpful for multi-stage workloads or shared "community" clusters with multiple user groups.
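To make this concrete, here is a minimal, illustrative slurm.conf fragment showing how elastic partitions of this kind are typically expressed. The script paths, node names, counts, and CPU sizes below are hypothetical, not the stock CycleCloud configuration; only the configuration keys (SuspendProgram, ResumeProgram, State=CLOUD, etc.) are standard Slurm:

```ini
# Illustrative slurm.conf fragment -- paths, node names, and sizes are hypothetical.
# Slurm's power-saving/cloud interface is what lets an orchestrator such as
# CycleCloud create VMs on demand and deallocate them when idle.
SuspendProgram=/opt/cycle/slurm/suspend.sh   # hypothetical hook: deallocate idle VMs
ResumeProgram=/opt/cycle/slurm/resume.sh     # hypothetical hook: start VMs for queued jobs
SuspendTime=300                              # power down nodes idle for 5 minutes

# Tightly coupled MPI jobs run on a partition whose nodes share an InfiniBand fabric
NodeName=hpc-[1-16] CPUs=120 State=CLOUD
PartitionName=hpc Nodes=hpc-[1-16] Default=YES MaxTime=INFINITE

# Throughput-oriented jobs use a separate partition that can span VM families
NodeName=htc-[1-64] CPUs=16 State=CLOUD
PartitionName=htc Nodes=htc-[1-64] MaxTime=INFINITE
```

Nodes marked State=CLOUD do not exist until Slurm asks for them, which is what allows the cluster to grow and shrink with the queue.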

 

With that in mind, every CycleCloud installation includes a Slurm cluster template with two partitions pre-defined: one for tightly coupled MPI ("HPC") workloads and one for distributed, high-throughput ("HTC") workloads. This template represents an initial configuration and can be modified to include any number of partitions, VM types, and autoscale limits.
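From the user's side, targeting these partitions is an ordinary Slurm submission. A brief sketch, assuming the two stock partition names above (the script names and resource requests are invented for illustration):

```shell
# Illustrative submissions -- partition names follow the stock template,
# but scripts and resource counts are hypothetical.
sbatch --partition=hpc --nodes=4 --ntasks-per-node=120 mpi_job.sh  # tightly coupled MPI run
sbatch --partition=htc --array=1-100 task.sh                       # high-throughput array job
```

CycleCloud then provisions only the VM types backing the requested partition, so an MPI job and an array job can autoscale independently within the same cluster.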

 

For more information on deploying Slurm clusters with CycleCloud, visit the CycleCloud documentation or contact your Azure account team. For help customizing and configuring Slurm, or enterprise Slurm support, please contact SchedMD.

 

Azure CycleCloud running a Slurm cluster