Azure Data Factory now offers pre-configured compute size categories (Small, Medium, Large) for the Spark clusters that run your Mapping Data Flows, making it much easier to configure the Azure Integration Runtime. To view the cluster configuration behind each category, click the Advanced link. You can still define your own configuration from the same compute and core options available today in ADF and Synapse by selecting the "Custom" option.
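Under the hood, these size categories map onto the same integration runtime compute settings you can set today. As a rough sketch, a Managed integration runtime definition for data flows looks like the following JSON (the specific `computeType` and `coreCount` values shown here are illustrative assumptions, not the exact values behind each category):

```json
{
  "name": "DataFlowIntegrationRuntime",
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": {
        "location": "AutoResolve",
        "dataFlowProperties": {
          "computeType": "General",
          "coreCount": 8,
          "timeToLive": 10
        }
      }
    }
  }
}
```

Selecting Small, Medium, or Large in the UI effectively picks a preset combination of these properties for you, while "Custom" leaves them fully editable.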
Note that these new integration runtime settings will land in ADF in the next 1-2 weeks.
Published Jul 20, 2022
Version 1.0
Mark Kromer
Microsoft
Azure Data Factory Blog