Altair ultraFluidX™ on Azure
Published Jan 25 2022

Simulation engineers who need the aerodynamic characteristics of passenger and heavy-duty vehicles can now use Altair ultraFluidX™ on Azure. With ultraFluidX on Azure, simulations can be sped up by 75 to 450 percent, depending on the boundary conditions and the size of the model being analyzed, greatly accelerating product development in the aerospace and automotive industries.

We tested ultraFluidX on Azure on two types of automotive models:

  • Altair Roadster (simple geometry scaling)
  • Altair CX1 (representative automotive production-level case)

[Figure: Altair Roadster model (left) and Altair CX1 model (right)]

We ran both of these models on Azure NDv4 VMs and found that the fluid dynamics simulations sped up significantly depending on model complexity and the number of GPUs used. The ND A100 v4 series starts with a single virtual machine powered by eight NVIDIA A100 Tensor Core GPUs. ND A100 v4-based deployments can scale up to thousands of GPUs with 1.6 Tb/s of interconnect bandwidth per VM. Each GPU within the VM is provided with its own dedicated, topology-agnostic 200 Gb/s link via NVIDIA Quantum InfiniBand networking. These connections are automatically configured between VMs occupying the same virtual machine scale set and support NVIDIA GPUDirect RDMA.
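Before launching a run, it can be useful to confirm the GPU inventory on the VM from the guest OS. The snippet below is a minimal sketch, assuming the NVIDIA driver (and therefore nvidia-smi) is installed and on the PATH; it is a generic check, not part of the ultraFluidX workflow.

```python
import subprocess

# Query each GPU's index, name, and total memory via nvidia-smi (CSV, no header).
# On an ND A100 v4 VM (e.g., Standard_ND96asr_v4) this should list eight A100 GPUs.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

gpus = [line.strip() for line in result.stdout.splitlines() if line.strip()]
print(f"Detected {len(gpus)} GPUs:")
for gpu in gpus:
    print("  " + gpu)
```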

 

This infrastructure makes a big difference in the speed-up of simulations for high-end applications like ultraFluidX. The total simulation consists of two phases: a mostly CPU-based pre-processing phase and a GPU-based computation phase. We ran the simulations to test GPU performance on the ND A100 v4 VM and found the following results:

 

Wall-clock time:

| Model Name | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
|---|---|---|---|---|
| Roadster | 26 min 11 sec | 18 min 17 sec | 12 min 11 sec | 8 min 58 sec |
| CX1 | NA | NA | 111 min 18 sec | 79 min 2 sec |

 

Speed-up (relative to the lowest GPU count that fits the model):

| Model Name | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
|---|---|---|---|---|
| Roadster | 1.00 | 1.43 | 2.15 | 2.91 |
| CX1 | NA | NA | 1.00 | 1.41 |

 

Note: "NA" Denotes that the model requires more than 100GB of GPU memory, hence simulation can't perform with #1 GPU and #2 GPU.

 


 “We are excited to partner with Altair and bring ultraFluidX to Azure. This represents the latest successful joint initiative in a long-standing collaborative relationship between two strategic partners. With ultraFluidX on Azure, our shared automotive and manufacturing customers can more rapidly innovate by gaining real-world performance insights when running some of the largest CFD jobs on purpose-built GPU VMs powered by NVIDIA. We invite customers to explore ultraFluidX on Azure and gain from our collective experience.”

 

Kurt Niebuhr, Azure Compute HPC | AI Workload Incubation & Ecosystem Team Lead

 

"ultraFluidX is Altair's high-fidelity solution for transient external aerodynamics. Thanks to the native GPU implementation, ultraFluidX naturally leverages the massive power and memory bandwidth of modern GPUs. In combination with a powerful multi-GPU environment, such as Azure’s ND A100 V4 VMs powered by NVIDIA A100 Tensor Core GPUs, ultraFluidX enables the ultra-fast prediction of aerodynamic properties of passenger and heavy-duty vehicles, aero-acoustics, and building and environmental aerodynamics. Highly transient aerodynamics simulations now can be resolved overnight on a single server, which is a breakthrough for simulation-based design."

 

Christian F. Janßen, Director ultraFluidX

 

About Altair ultraFluidX:

ultraFluidX is a GPU-based Lattice Boltzmann solver that delivers unbeatable performance and overnight transient simulations of full-vehicle aerodynamics on a single server. Key features include automated meshing, which allows quick evaluation of hundreds of part configurations, and efficient post-processing with a reduced memory footprint.
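For readers unfamiliar with the underlying numerics: the lattice Boltzmann method evolves particle distribution functions through a purely local collision step and a neighbor-only streaming step, which is why it maps so well onto GPUs. The following is a toy, CPU-based NumPy sketch of a generic D2Q9/BGK scheme on a periodic box; it is purely illustrative and does not reflect ultraFluidX's actual implementation.

```python
import numpy as np

# Toy D2Q9 lattice Boltzmann sketch (BGK collision, fully periodic domain).
nx, ny, tau, steps = 128, 128, 0.6, 500

# D2Q9 lattice velocities and weights.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """BGK equilibrium distributions for density rho and velocity (ux, uy)."""
    cu = e[:, 0, None, None]*ux + e[:, 1, None, None]*uy   # e_i . u
    usqr = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usqr)

# Initial condition: unit density with a small periodic shear in ux.
y = np.arange(ny)
rho = np.ones((nx, ny))
ux = 0.05*np.sin(2*np.pi*y/ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(steps):
    # Streaming: each distribution moves one lattice link (neighbor-only access).
    for i in range(9):
        f[i] = np.roll(f[i], shift=(e[i, 0], e[i, 1]), axis=(0, 1))
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    # Collision: purely local relaxation toward equilibrium (GPU-friendly).
    f += (equilibrium(rho, ux, uy) - f) / tau

print("mean density:", rho.mean(), "max |u|:", np.sqrt(ux**2 + uy**2).max())
```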
