If you’re not already running your electronic design automation (EDA) backend workflows in the cloud, you’ve no doubt been seriously exploring options that would let you take advantage of the scale of cloud computing. One large hurdle for most cloud infrastructure architects has been getting tools, libraries, and datasets into Azure from on-premises storage. The magnitude and complexity of data movement, along with other issues such as securing your intellectual property (IP), challenge many teams that could otherwise benefit from running EDA verification workloads on Azure high-performance computing (HPC) resources.
With the availability of the Azure HPC Cache service and orchestration tools like Terraform from HashiCorp, it’s much easier than you may think to safely present your tools, libraries, IP designs, and other data to large Azure HPC compute farms managed by Azure CycleCloud.
Use Azure HPC Cache for high throughput, low latency access to on-premises NAS
Azure HPC Cache is the pivotal service that delivers the high-performance, low-latency file access needed to run EDA backend tools efficiently on Azure. These tools demand high processor clock rates and generate considerable file I/O. The intelligent-caching file service enables a hybrid-infrastructure environment, meaning that you can burst your EDA backend compute grid into Azure with reliably high-speed access to your on-premises network-attached storage (NAS) data. Azure HPC Cache eliminates the need to explicitly copy your tools, libraries, IP, and other proprietary information from on-premises storage into the cloud, or to store it there permanently. The cached copy can be destroyed or re-created at essentially the press of a button.
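To give a sense of how little configuration this takes, here is a minimal Terraform sketch that provisions a cache using the AzureRM provider’s `azurerm_hpc_cache` resource. The resource group, subnet, and cache names are hypothetical placeholders, and the supporting network resources are assumed to be defined elsewhere in your configuration:

```hcl
# Hypothetical names; assumes an existing resource group ("eda") and a
# dedicated subnet ("cache") are defined elsewhere in the configuration.
resource "azurerm_hpc_cache" "eda" {
  name                = "eda-hpc-cache"
  resource_group_name = azurerm_resource_group.eda.name
  location            = azurerm_resource_group.eda.location
  cache_size_in_gb    = 12288          # supported sizes range from 3,072 to 49,152 GB
  subnet_id           = azurerm_subnet.cache.id
  sku_name            = "Standard_4G"  # throughput SKUs: Standard_2G, _4G, or _8G
}
```

Because it’s ordinary Terraform, the same `terraform apply` and `terraform destroy` workflow you use for other resources stands up the cache when a verification campaign begins and tears it down, along with the cached copy of your data, when it ends.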
Use Terraform to orchestrate infrastructure provisioning and management
New cloud orchestration tools like HashiCorp’s Terraform make it much simpler to deploy cloud resources. These tools let you express your infrastructure as code (IaC) so that IT and cloud architects do not have to continually reinvent the wheel when setting up new or expanding cloud environments.
HashiCorp Terraform is one of the most widely used IaC tools; you may already be using it to manage your on-premises infrastructure. It’s an open-source tool (downloadable from HashiCorp) that codifies infrastructure in modular configuration files describing the topology of cloud resources. You can use it to provision and manage on-premises and cloud infrastructure simultaneously, including Azure HPC Cache service instances. By helping you quickly, reliably, and precisely scale infrastructure up and down, IaC automation helps ensure that you pay for resources only when you actually need and use them.
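Attaching your on-premises NAS as a storage target is likewise just a few declarative lines. The sketch below uses the AzureRM provider’s `azurerm_hpc_cache_nfs_target` resource; the hostname, export path, and resource names are hypothetical, and it assumes an `azurerm_hpc_cache` resource named "eda" is defined elsewhere in the configuration:

```hcl
# Present an on-premises NFS export through the cache under /tools.
# Hostname, export, and resource names here are hypothetical placeholders;
# assumes an azurerm_hpc_cache resource named "eda" defined elsewhere.
resource "azurerm_hpc_cache_nfs_target" "onprem_nas" {
  name                = "onprem-nas"
  resource_group_name = azurerm_resource_group.eda.name
  cache_name          = azurerm_hpc_cache.eda.name
  target_host_name    = "nas.example.corp"       # your on-premises NAS
  usage_model         = "READ_HEAVY_INFREQ"      # read-caching profile suited to tools and libraries

  namespace_junction {
    namespace_path = "/tools"         # path clients mount from the cache
    nfs_export     = "/export/tools"  # export on the on-premises NAS
  }
}
```

The `usage_model` setting tells the cache how to balance read caching against write-through behavior; a read-heavy profile fits tool and library trees that change rarely but are read constantly by thousands of jobs.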
Use Azure CycleCloud for compute scalability, pay-for-only-what-you-use elasticity, and because it’s easier than you thought
The obvious advantages of using Azure include on-demand access to virtual machines (VMs) and CPU cores that let you scale out backend EDA workflow jobs quickly and with massive parallelism. Azure CycleCloud can connect to IBM LSF to manage the compute farm deployment in unison with your LSF job scheduling. The HPC Cache component lets you maintain control of your data while providing the low-latency storage access that keeps cloud CPU cores busy and productive. With high-speed access to all required data, your queued-up verification jobs run continuously, and you minimize paying for idle cloud resources.
Summarizing the benefits, Azure HPC Cache and automation tools like Terraform let you more easily take advantage of cloud compute by:
Providing high-speed access to on-premises data with scalability to support tens of thousands of cloud-based compute cores
Minimizing complex data movement processes
Streamlining provisioning and management for rapid deployment while preserving the familiarity of existing workflows and processes; push-button deployment means electronic design architects don’t have to change what they’re used to doing
Ensuring proprietary data does not linger in the cloud after compute tasks are finished
Azure solutions truly democratize technology, making massive, on-demand compute resources accessible to EDA businesses of every size, from small designers to mid-size semiconductor manufacturers and the largest chip foundries. With access to tens of thousands of compute cores in Azure, companies can meet surge compute demand with the elasticity they need.
Use validated architectures
If you’re concerned about blazing new territory, be assured that EDA customers are already running many popular tool suites in Azure, and they’ve validated those workloads against Azure HPC Cache. To give you an idea of the results: for workloads running tools such as Ansys RedHawk-SC, Cadence Xcelium, Mentor Calibre DRC, Synopsys PrimeTime, and Synopsys TetraMAX, users report behavior and performance similar to or better than running the same backend workloads on their own datacenter infrastructure. Our Azure and HPC Cache teams also routinely work with EDA software vendors to optimize design solutions for Azure deployment.
Build on reference architectures
The reference architectures below illustrate how you can use Azure HPC Cache for EDA workloads in both cloud-bursting (hybrid) and all-in-Azure deployments.
Figure 1. A general reference architecture for fully managed caching of on-premises NAS with the Azure HPC Cache service to burst verification workloads to Azure compute.
Figure 2. A general reference architecture with on-premises NAS and Azure HPC Cache optimizing read access to tools and libraries hosted on-premises, and project data hosted on Azure NetApp Files (ANF).
Access additional resources
Follow the links below for additional information about Azure HPC Cache and related tools and services.