
ADF has added a TTL (time-to-live) option to the Azure Integration Runtime for Data Flow properties to reduce data flow activity times.

[Image: azureir2.png — the TTL setting in the Azure Integration Runtime Data Flow properties]

This setting is only used during ADF pipeline executions of Data Flow activities. Debug runs from pipelines and data preview debugging continue to use the debug settings, which have a preset TTL of 60 minutes.

If you leave the TTL at 0, ADF spins up a new Spark cluster environment for every Data Flow activity that executes. This means an Azure Databricks cluster is provisioned each time, which takes about 5-7 minutes before it becomes available to execute your job.

However, if you set a TTL, ADF maintains a pool of VMs that can be reused to spin up each subsequent data flow activity against that same Azure IR. This reduces the time needed to start up the environment before your job executes.

ADF maintains that pool for the TTL duration after the last data flow pipeline activity executes. Note that this extends the billing period for a data flow by the length of your TTL. However, your data flow job execution times will decrease because the VMs from the compute pool are reused. The compute resources are not provisioned until your first data flow activity executes using that Azure IR.
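To make the setting concrete, here is a minimal sketch of what an Azure IR definition with a Data Flow TTL looks like, expressed as a Python dict mirroring the managed integration runtime JSON. The IR name and the specific compute values are illustrative placeholders; check the ADF documentation for the schema used by your API version.

```python
# Sketch of a managed Azure Integration Runtime definition with a Data Flow TTL.
# "AzureIR-DataFlows" and the compute sizing below are hypothetical examples.
azure_ir = {
    "name": "AzureIR-DataFlows",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10,  # keep the compute pool warm for 10 minutes
                },
            }
        },
    },
}

# The TTL (in minutes) lives under dataFlowProperties.
ttl = (
    azure_ir["properties"]["typeProperties"]
    ["computeProperties"]["dataFlowProperties"]["timeToLive"]
)
print(ttl)  # 10
```

With `timeToLive` set to 0, every Data Flow activity on this IR would provision a fresh cluster; with 10, clusters started within 10 minutes of the last activity reuse the warm pool.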

Read more about the Azure Integration Runtime at https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime. There is also an ADF Data Flow performance guide at https://docs.microsoft.com/en-us/azure/data-factory/concepts-data-flow-performance to help you optimize your environment.