Today, we are introducing support for orchestrating Synapse notebooks and Synapse Spark job definitions (SJD) natively from Azure Data Factory pipelines. This is a significant benefit for customers who have invested in both ADF and Synapse Spark, since they no longer need to switch to Synapse Pipelines to orchestrate Synapse notebooks and SJDs.
NOTE: Previously, Synapse notebook and SJD activities were available only in Synapse Pipelines.
One of the critical benefits of Synapse notebooks is the ability to use Spark SQL and PySpark to perform data transformations. It allows you to use the best tool for the job, whether it be SQL for simple data cleaning tasks or PySpark for more complex data processing tasks.
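As a rough illustration of mixing the two in one notebook, the PySpark snippet below runs a simple cleaning step in Spark SQL and a more involved aggregation with the DataFrame API. The table and column names (raw_sales, region, amount) are made up for the example; in a Synapse notebook the spark session is already available, but getOrCreate() keeps the snippet self-contained.

```python
from pyspark.sql import SparkSession, functions as F

# In a Synapse notebook `spark` already exists; getOrCreate() makes this runnable elsewhere too.
spark = SparkSession.builder.getOrCreate()

# Hypothetical input data; in practice this would come from a lake database table or ADLS.
raw = spark.createDataFrame(
    [("West", 120.0), ("West", None), ("East", 75.5)],
    ["region", "amount"],
)
raw.createOrReplaceTempView("raw_sales")

# Simple cleaning expressed in Spark SQL...
cleaned = spark.sql("SELECT region, amount FROM raw_sales WHERE amount IS NOT NULL")

# ...followed by an aggregation in PySpark.
summary = (
    cleaned.groupBy("region")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("order_count"))
)
summary.show()
```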
1. Add a Synapse Notebook activity to a Data Factory pipeline
2. Create a connection to the Synapse workspace through a new compute Linked Service (Azure Synapse Analytics Artifact)
3. Choose an existing notebook to operationalize
Note: If you do not specify 'Spark pool', 'Executor size', and similar settings, the values specified in the notebook are used. These properties are optional and simply let you override the notebook's Spark configuration during the operational run.
4. Monitor the notebook run details by accessing the activity output, which contains a "sparkApplicationStudioUrl" that takes you to the Synapse workspace for detailed run monitoring. The notebook "exitValue" is also available in the output and can be referenced in downstream activities (see the sketch below).
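As a minimal sketch of how a notebook surfaces an exit value, the cell below uses mssparkutils.notebook.exit(); the row-count logic and the raw_sales table are illustrative only.

```python
from notebookutils import mssparkutils  # available by default in Synapse notebooks
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative work: count rows in a hypothetical table prepared earlier in the notebook.
row_count = spark.sql("SELECT COUNT(*) AS c FROM raw_sales").collect()[0]["c"]

# The string passed to exit() is surfaced as the activity's "exitValue"
# and ends the notebook run at this point.
mssparkutils.notebook.exit(str(row_count))
```

A downstream activity can then typically read the value with an expression along the lines of @activity('Notebook1').output.status.Output.result.exitValue, where 'Notebook1' stands for the name of your Synapse Notebook activity; verify the exact path against your pipeline's actual activity output.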
Documentation: Synapse Notebook activity in ADF
Documentation: Synapse SJD (Spark job definition) activity in ADF
Documentation: Azure Synapse Analytics (Artifact) Linked Service in ADF