With pre-configured categories for Spark compute (Small, Medium, Large), Azure Data Factory makes it easy to configure the Azure Integration Runtime for your Mapping Data Flows. To view the cluster configuration behind each category, click the Advanced link. You can still set your own configuration from the same compute and core options available today in ADF and Synapse by selecting the "Custom" option.
Note that these new integration runtime settings will land in ADF in the next 1-2 weeks.
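If you script your factory deployments, each size category ultimately resolves to a Managed integration runtime definition with specific data-flow compute properties. Below is a minimal, hedged sketch in Python that builds such a JSON payload. The `SIZE_TO_CORES` mapping is hypothetical (the real numbers behind each category are shown under the Advanced link), and the payload shape follows the Managed integration runtime schema with `dataFlowProperties`; verify both against your factory before use.

```python
import json

# Hypothetical mapping of the Small/Medium/Large categories to worker
# core counts -- the actual values are visible under the "Advanced"
# link in the ADF UI and may differ.
SIZE_TO_CORES = {"Small": 4, "Medium": 8, "Large": 16}

def managed_ir_payload(size="Small", compute_type="General", ttl_minutes=10):
    """Build a JSON body for a Managed (Azure) integration runtime whose
    Mapping Data Flow cluster corresponds to the chosen size category."""
    if size not in SIZE_TO_CORES:
        # "Custom" would require an explicit core count instead of a category.
        raise ValueError(f"Unknown size category: {size}")
    return {
        "properties": {
            "type": "Managed",
            "typeProperties": {
                "computeProperties": {
                    "location": "AutoResolve",
                    "dataFlowProperties": {
                        "computeType": compute_type,
                        "coreCount": SIZE_TO_CORES[size],
                        "timeToLive": ttl_minutes,
                    },
                }
            },
        }
    }

print(json.dumps(managed_ir_payload("Medium"), indent=2))
```

You could feed a payload like this to an ARM template or a REST call that creates the integration runtime, swapping the category name to move between cluster sizes without hand-editing core counts.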