Oct 29 2020 09:57 AM - last edited on Feb 01 2021 04:26 PM by Eric Starker
Apologies, this question is rather general as I still need to get around to prototyping this, but you experts may have a good feel for whether it's feasible or not, and where issues may crop up :)
Azure Batch is what typically comes up when searching for ways to run Monte Carlo simulations in Azure... but in terms of PaaS, would using the Fan Out pattern in Azure Durable Functions be a feasible approach?
Oct 29 2020 10:02 AM
@BobbyJ10 I work with a number of customers in the HPC space, and they typically use Batch, CycleCloud (to facilitate getting HPC schedulers up and running), or VM Scale Sets, as most HPC workloads need much more memory and CPU horsepower to complete jobs. I've honestly never seen a customer use a serverless path, but if the CPU/memory requirements are lightweight, I suppose it would be possible.
Oct 29 2020 10:14 AM
Thank you for the advice and standard approaches @CloudyRyan, I'll be sure to check how problem size affects performance when I look into the Azure Functions use case.
Oct 29 2020 10:40 AM
@BobbyJ10 There are a couple of ways to approach this. If the simulations can run as an Azure Function, Durable Functions can be a good way to orchestrate them as activity functions, using the fan-out/fan-in pattern. There are some settings that you might need to change to make a CPU-bound workload scale properly.
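To illustrate the shape of that fan-out/fan-in pattern, here's a minimal plain-Python sketch: one "orchestrator" fans a Monte Carlo pi estimate out to several "activity" workers and then aggregates the results. This is not the Durable Functions API; the names `run_simulation` and `orchestrate` are illustrative, and a thread pool stands in for the separate function invocations Durable Functions would schedule for you.

```python
# Hypothetical sketch of the fan-out/fan-in pattern, in plain Python.
# In a real Durable Functions app, orchestrate() would be an orchestrator
# function yielding context.task_all(...) over activity calls, and each
# run_simulation() would be a separate activity function invocation.
import random
from concurrent.futures import ThreadPoolExecutor

def run_simulation(seed, n):
    """One 'activity': estimate pi from n random points in the unit square."""
    rng = random.Random(seed)  # seeded so results are reproducible
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def orchestrate(num_workers=8, samples_per_worker=100_000):
    """The 'orchestrator': fan out the simulations, then fan in (average)."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(run_simulation, seed, samples_per_worker)
                   for seed in range(num_workers)]          # fan out
        results = [f.result() for f in futures]             # fan in
    return sum(results) / len(results)

if __name__ == "__main__":
    print(f"pi ~= {orchestrate():.3f}")
```

Note that a thread pool gains you nothing for CPU-bound Python code because of the GIL; the point here is only the coordination shape. On the Functions side, each activity runs as its own invocation, which is where the scale-out actually comes from.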
We've also seen customers use Durable Functions to orchestrate workloads running in Azure Batch or Container Instances. That's a good route if you need to coordinate heavier compute to do large batches of work.
Oct 29 2020 10:45 AM
Thanks @Anthony Chu, it's really great that you can chain these services together in this manner. Will keep all of this in mind when I start taking a look into it.