Table of Contents
How Azure NetApp Files cache volumes work
Azure NetApp Files cache volume is EDA workflow ready
Introduction
Electronic Design Automation (EDA) is the foundation of modern semiconductor innovation, enabling engineers to design, simulate, and validate increasingly sophisticated chip architectures. As designs push the boundaries of power, performance, and area (PPA) to meet escalating market demands, the volume of associated design data has surged, with a single System-on-Chip (SoC) project generating multiple petabytes of data over its development lifecycle. Data mobility and accessibility have become critical bottlenecks. To overcome these challenges, Azure NetApp Files (ANF) cache volumes are purpose-built to optimize data movement and minimize latency, delivering high-speed access to massive design datasets across distributed environments. By mitigating data gravity, Azure NetApp Files cache volumes let chip designers leverage cloud-scale compute resources on demand and at scale, accelerating innovation without the constraints of physical infrastructure.
Co-authors:
- Andy Chan, Principal Product Manager – Azure NetApp Files HPC/EDA
- Ranga Sankar, Azure NetApp Files Technical Marketing Engineer
- Chad Morgenstern, Director of Engineering (Performance), NetApp
How Azure NetApp Files cache volumes work
EDA hybrid and burst workflows simplified.
Azure NetApp Files cache volumes, built on proven NetApp FlexCache® technology, accelerate data access across distributed environments and from on-premises to Azure.
By caching hot datasets on first read, they provide low-latency access to frequently used data in remote locations—without duplicating entire volumes. This reduces bandwidth consumption and significantly lowers Azure infrastructure costs associated with data ingestion.
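The read-through behavior described above can be illustrated with a minimal sketch (the `ReadThroughCache` class and the dict-backed "origin" below are purely illustrative stand-ins, not the FlexCache implementation):

```python
class ReadThroughCache:
    """Illustrative read-through cache: fetch from the origin on first read,
    serve all subsequent reads locally. Only the accessed (hot) data is ever
    transferred; cold data never leaves the origin."""

    def __init__(self, origin):
        self.origin = origin  # stands in for the on-premises origin volume
        self.cache = {}       # stands in for the Azure-side cache volume

    def read(self, path):
        if path not in self.cache:                # cache miss: first read
            self.cache[path] = self.origin[path]  # one-time fetch over the WAN
        return self.cache[path]                   # cache hit: local, low latency

origin = {"/libs/stdcell.lib": b"cell library data", "/pdk/models.sp": b"spice models"}
cache = ReadThroughCache(origin)
cache.read("/libs/stdcell.lib")          # miss: fetched from origin, then cached
cache.read("/libs/stdcell.lib")          # hit: served from the cache
print(sorted(cache.cache))               # only the file actually read is cached
```

Note that in this sketch `/pdk/models.sp` is never transferred because nothing read it; the same principle is what keeps ingress bandwidth low for large EDA trees.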
Azure NetApp Files cache volumes deliver a range of key advantages that empower EDA teams to maximize performance, efficiency, and collaboration in cloud-driven workflows:
- LAN-level performance: Frequently accessed files are cached in Azure, giving engineers in different geographies low-latency access to that data. Co-locating cached data and compute in Azure shortens EDA simulation run times.
- Reduced WAN traffic: Only required data is synchronized, dramatically cutting ingress bandwidth usage, while intelligent file-locking mechanisms maintain consistency.
- Seamless Azure integration: Azure NetApp Files cache volumes enable cloud bursting without disrupting existing NFS-based EDA workflows.
- Improved collaboration: Azure NetApp Files cache volumes improve data concurrency by design, allowing multiple caches to fan out from a single origin and deliver real-time, contention-free access to shared data across geographies.
- Lower TCO: By reducing bandwidth consumption, minimizing infrastructure overhead, and eliminating the need for duplicate storage, Azure NetApp Files cache volumes lower total cost of ownership for globally distributed workloads.
Azure NetApp Files cache volume is EDA workflow ready
The first major challenge in running EDA jobs in the cloud is identifying the files, tools, libraries, design IP, and process design kits (PDKs) that the flow needs, and then determining how to move that data into the cloud. This often becomes a time-consuming, trial-and-error process riddled with “file or directory not found” errors. Engineers must resolve missing dependencies one by one, often without full visibility into what the flow requires until it breaks.
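One pragmatic way to avoid that one-error-at-a-time discovery loop is to check a flow's declared inputs up front, before launching it. A minimal sketch, assuming the flow can enumerate its required paths (the `check_flow_inputs` helper and the example paths are hypothetical):

```python
import os

def check_flow_inputs(required_paths):
    """Return every input path an EDA flow expects but cannot find,
    so missing dependencies surface in one pass rather than one
    'file or directory not found' error at a time."""
    return [p for p in required_paths if not os.path.exists(p)]

# Hypothetical inputs a flow manifest might declare
inputs = ["/tools/synth/bin", "/pdk/7nm/models", "/ip/serdes/v2"]
for path in check_flow_inputs(inputs):
    print(f"missing dependency: {path}")
```

With a cache volume mounted, the same check passes without pre-copying: any path that exists on the origin volume is visible through the cache on first access.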
The second challenge is keeping disparate silos aligned across geographies. For example, teams in Bangalore, Austin, Tel Aviv, and Toronto often work concurrently on the same design data, but with separate storage domains and inconsistent sync practices. Without disciplined syncing (the selective, intentional synchronization of only the required files), the teams face three core risks: drift, where datasets diverge and break workflows; syncing overhead, where unnecessary data transfer wastes time and bandwidth; and loss of integrity, where files are corrupted, misaligned, or modified out of band, undermining trust in the design data.
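The drift risk can be made concrete with a small sketch that fingerprints each site's copy of the data and reports divergence (the `detect_drift` helper and the in-memory site data are hypothetical; real tooling would hash files on disk):

```python
import hashlib

def fingerprint(files):
    """Map each file path to a content hash. Here the 'files' are in-memory
    bytes standing in for a site's local copies."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def detect_drift(site_a, site_b):
    """Return paths whose content differs between sites, or that exist
    on only one site."""
    paths = set(site_a) | set(site_b)
    return sorted(p for p in paths if site_a.get(p) != site_b.get(p))

austin    = fingerprint({"top.v": b"module top;", "cfg.tcl": b"set CORNER ss"})
bangalore = fingerprint({"top.v": b"module top;", "cfg.tcl": b"set CORNER ff"})
print(detect_drift(austin, bangalore))  # ['cfg.tcl']
```

With cache volumes fanning out from a single origin, this class of audit becomes unnecessary: every site reads the same authoritative copy, so there is no second dataset to drift.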
Azure NetApp Files cache volumes address these challenges, eliminating the need to copy the entire dataset into Azure. To create a caching pair from on-premises ONTAP, first select the data volumes required by the design flow, then create a cache volume in Azure NetApp Files that establishes a secure relationship with the on-premises origin volume. Once created and mounted, users and design flows gain instant read/write access to a complete, up-to-date cache of the origin volume.
While EDA workloads generate massive data footprints, the data is inherently structured in a way that makes it cache-volume ready. In a typical EDA flow, tools, libraries, and PDKs are stored in volumes that are accessed in a predominantly read-heavy manner. These read-intensive volumes are ideal targets for pairing with cache volumes, allowing organizations to fully leverage the performance benefits listed above.
Because Azure NetApp Files cache volumes are built on NetApp ONTAP features and tools, adoption is straightforward. The origin volume can continue using NetApp Snapshot copies or SnapMirror replication, and user access controls (ACLs/permissions) are automatically preserved on the cache. The setup process leverages ONTAP’s built-in peering mechanism, ensuring it relies on proven technology rather than introducing a new syncing method. Having been trusted for years in on-premises EDA environments, ONTAP FlexCache now extends that same reliability and performance to Azure.
Conclusion
Azure NetApp Files cache volumes enable global collaboration across EDA teams by addressing the challenges of data gravity: moving massive datasets with minimal latency and reduced operational overhead. By eliminating redundant infrastructure and manual data replication, and by intelligently caching frequently accessed data, this solution significantly reduces network strain and associated costs. The result is an architecture that accelerates design cycles and enhances productivity while providing a scalable, cost-effective foundation for modern semiconductor innovation in the Azure cloud.
Next steps
Sign up for Azure NetApp Files cache volumes, now in Public Preview:
- The minimum capacity pool size is 1 TiB; cache volumes can share pools with other regular and large volumes.
- Cache volumes range from 50 GiB (minimum) to 1 PiB (maximum).
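The preview limits above can be expressed as a quick client-side sanity check (a sketch only; the `cache_volume_size_ok` helper is hypothetical, and Azure NetApp Files enforces these limits itself):

```python
MIN_POOL_GIB = 1024      # 1 TiB minimum capacity pool size
MIN_CACHE_GIB = 50       # smallest cache volume
MAX_CACHE_GIB = 1024**2  # 1 PiB maximum cache volume (1,048,576 GiB)

def cache_volume_size_ok(size_gib, pool_gib, used_gib=0):
    """Check a requested cache volume size against the preview limits and
    the space remaining in a (possibly shared) capacity pool."""
    if pool_gib < MIN_POOL_GIB:
        return False                        # pool itself is below the minimum
    if not (MIN_CACHE_GIB <= size_gib <= MAX_CACHE_GIB):
        return False                        # outside the 50 GiB .. 1 PiB range
    return used_gib + size_gib <= pool_gib  # must fit alongside other volumes

print(cache_volume_size_ok(100, 1024))  # True: fits a 1 TiB pool
print(cache_volume_size_ok(49, 1024))   # False: below the 50 GiB minimum
```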
Several leading EDA companies and independent software vendors (ISVs) are already leveraging Azure NetApp Files cache volumes to accelerate their design workflows in the Azure cloud. By doing so, they’re reaping the benefits of simplified data management, enhanced performance, and seamless access to large-scale design datasets.