Azure High Performance Computing (HPC) Blog

Siemens Simcenter™ STAR-CCM+™ on Azure HBv4

Hugo_Meiland, Microsoft
Jun 10, 2025

This article presents the performance results of running Simcenter STAR-CCM+ on Azure’s HBv4 virtual machines (VMs). Simcenter STAR-CCM+ is a multiphysics simulation tool that enables engineers to model and explore the possibilities of products in real-world situations using Computational Fluid Dynamics (CFD).

A suite of solvers is included in Simcenter STAR-CCM+ for solving problems involving complex geometries and physical phenomena. The wide range of physics models includes CFD, computational solid mechanics, electromagnetics, heat transfer, multiphase flow, particle dynamics, reacting flow, electrochemistry, aeroacoustics, and rheology. Aerodynamics, one of the major applications, can also be analyzed with Simcenter STAR-CCM+, and designs can be optimized to reduce drag and weight.

Azure HBv4

The performance tests of Simcenter STAR-CCM+ were carried out using HBv4-series VMs available on Azure.

The HBv4 is built around the AMD EPYC 9V33X (Genoa-X) CPU, which provides a base frequency of 2.4 GHz (3.7 GHz peak) and an impressive 176 non-hyper-threaded cores. The 768 GB of main memory (and even 1.4 TB on the HX-series), together with 1.8 TB of local NVMe scratch and 400 Gb/s InfiniBand, make this a well-balanced and increasingly powerful VM for these CFD workloads.
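To experiment with an HBv4 VM directly, one can be deployed with the Azure CLI. The sketch below is a minimal example: the resource group, VM name, and region are placeholders, and the Ubuntu-HPC image URN shown is an assumption that may differ by region and subscription.

```bash
# Minimal sketch: deploy a single HBv4 VM with the Azure CLI.
# Resource group, VM name, and region are placeholders; adjust for your subscription.
az group create --name hpc-rg --location southcentralus

az vm create \
  --resource-group hpc-rg \
  --name hbv4-node01 \
  --size Standard_HB176rs_v4 \
  --image microsoft-dsvm:ubuntu-hpc:2204:latest \
  --admin-username azureuser \
  --generate-ssh-keys
```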

 

Simcenter STAR-CCM+ 2306 on a single HBv4 VM

The performance results achieved by running Simcenter STAR-CCM+ in parallel on single-node Azure HBv4 VMs are presented below. Four models were considered for the single-node tests. Based on these tests, a suitable VM configuration is selected for the multi-node runs.

Small models

These small models have a maximum of 20M cells and are therefore representative of models that will be run on single instances in production. This table shows the average elapsed time in seconds over 3 trials for each model, for varying numbers of vCPUs on the Standard HBv4-series VMs:

| VM | cores/vCPUs | Civil_20m | LeMans_17m | Reactor_9m | Hlmach_6m |
|---|---|---|---|---|---|
| Standard_HB176-24rs_v4 | 24 | 4.742 | 3.407 | 1.847 | 3.272 |
| Standard_HB176-48rs_v4 | 48 | 2.422 | 1.798 | 0.948 | 1.706 |
| Standard_HB176-96rs_v4 | 96 | 1.451 | 1.039 | 0.524 | 0.986 |
| Standard_HB176-144rs_v4 | 144 | 1.225 | 0.848 | 0.408 | 0.788 |
| Standard_HB176rs_v4 | 176 | 1.160 | 0.813 | 0.383 | 0.739 |

The following graph shows how the relative speed increases as the number of vCPUs increases:

The above graph shows that the initial scaling, up to 144 cores, is very strong: Civil_20m, for example, runs about 3.9x faster on 144 cores than on 24. Beyond that point the memory bandwidth appears to become saturated and scaling tails off, with the step from 144 to 176 cores improving Civil_20m by only about 6%. Since the price of the VM does not change with the number of cores used, the most price-efficient way of running these models is to use the full 176 cores.
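For context, single-node runs like these map to a straightforward STAR-CCM+ command line. The sketch below is a minimal example, assuming a simulation file named run.sim (a placeholder) and omitting licensing options, which vary by setup:

```bash
# Sketch: run a batch simulation on all 176 cores of a single HBv4 VM.
# run.sim is a placeholder; licensing flags are omitted.
starccm+ -np 176 -batch run.sim
```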

Simcenter STAR-CCM+ 2306 on an HBv4 cluster

The single-node tests confirmed that the most price-efficient way of running the solvers is with all 176 cores of an HBv4 VM. We therefore used all 176 cores per Standard_HB176rs_v4 VM when evaluating the performance of Simcenter STAR-CCM+ on multi-node (cluster) configurations.

Average elapsed time in seconds:

| VMs | cores/vCPUs | Civil 20m | LeMans 17m | SUV 106m_S | SUV 106m_C | LeMans 100m_S | LeMans 100m_C |
|---|---|---|---|---|---|---|---|
| 1 | 176 | 1.152 | 0.784 | 3.279 | 10.435 | 3.304 | 4.268 |
| 2 | 352 | 0.581 | 0.390 | 1.666 | 5.377 | 1.704 | 2.258 |
| 4 | 704 | 0.322 | 0.180 | 0.798 | 2.621 | 0.817 | 1.231 |
| 8 | 1408 | 0.180 | 0.109 | 0.365 | 1.335 | 0.389 | 0.690 |
| 16 | 2816 | 0.080 | 0.067 | 0.234 | 0.912 | 0.198 | 0.320 |
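The cluster runs continue to scale well: Civil 20m, for example, drops from 1.152 seconds on one VM to 0.080 seconds on 16, a roughly 14x speedup. Scaling out follows the same pattern as the single-node runs, with STAR-CCM+ pointed at a list of hosts. The sketch below is a minimal example; hostnames and the simulation file are placeholders, and exact options may vary by STAR-CCM+ version and MPI stack.

```bash
# Sketch: run across 2 HBv4 VMs (352 cores total) over InfiniBand.
# Hostnames are placeholders; STAR-CCM+ distributes the -np processes
# across the hosts listed in the machinefile.
cat > hosts.txt <<EOF
hbv4-node01
hbv4-node02
EOF

starccm+ -np 352 -machinefile hosts.txt -batch run.sim
```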

 

How to start running Simcenter STAR-CCM+ on Azure

To run your own CFD models using Simcenter STAR-CCM+ on Azure, use the Azure CycleCloud Workspace for Slurm (Overview of Azure CycleCloud Workspace for Slurm - Azure CycleCloud | Microsoft Learn) from the Azure Marketplace. This sets up a Slurm HPC cluster in just a few clicks, on which you can install and run your CFD models; a sketch of a batch job script for such a cluster is shown below.
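Once the cluster is deployed, a run can be submitted as a Slurm batch job. The sketch below is a minimal job script, assuming a partition named hbv4 and a STAR-CCM+ installation already available on the cluster's shared filesystem; the partition name and simulation file are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=starccm
#SBATCH --partition=hbv4          # placeholder: use your cluster's HBv4 partition name
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=176     # all physical cores on Standard_HB176rs_v4
#SBATCH --exclusive

# Build a machinefile from the nodes Slurm allocated to this job.
scontrol show hostnames "$SLURM_JOB_NODELIST" > hosts.txt

# Launch STAR-CCM+ across all allocated cores; run.sim is a placeholder.
starccm+ -np "$SLURM_NTASKS" -machinefile hosts.txt -batch run.sim
```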

Updated Jun 11, 2025
Version 2.0