Use Spark (Scala) to write data from ADLS to Synapse Dedicated Pool
Published Mar 15 2021

 

In this article, I will walk through how to write data from ADLS to an Azure Synapse dedicated SQL pool using AAD authentication. We will look at sample code for each step that helps us achieve that.

 

1. The first step is to import the libraries for the Synapse connector, as sketched below. This is an optional statement.

 

[Screenshot: notebook cell with the Synapse connector import statements]
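The original screenshot is not reproduced here. As a minimal sketch, the imports for the Synapse dedicated SQL pool connector in a Scala notebook typically look like the lines below; the exact packages depend on the connector version shipped with your Spark pool:

    // Constants for choosing between internal (managed) and external tables
    import com.microsoft.spark.sqlanalytics.utils.Constants
    // Adds the sqlanalytics read/write methods to DataFrameReader and DataFrameWriter
    import org.apache.spark.sql.SqlAnalyticsConnector._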

2. The next step is to initialize a variable to create/read the data frame.

[Screenshot: notebook cell that reads the source file from ADLS into a data frame]
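A minimal sketch of this cell, assuming the 100SalesRecords.csv sample file sits in the ADLS Gen2 container used throughout this article and has a header row:

    // Read the sales CSV from ADLS Gen2 into a data frame,
    // treating the first row as a header and letting Spark infer the column types
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("abfss://synapse@mukund.dfs.core.windows.net/100SalesRecords.csv")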

Note: The above step can also be written in the following format:

 

val df = spark.read.csv("abfss://synapse@mukund.dfs.core.windows.net/100SalesRecords.csv")

 

3. The next step is to use the write API in the following format:

 

[Screenshot: notebook cell that writes the data frame to the dedicated SQL pool]
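As a hedged sketch of this cell, a write through the connector's sqlanalytics API could look like the line below; the three-part name sqlpool1.dbo.SalesRecords is only a placeholder for your <dedicated pool database>.<schema>.<table>:

    // Write the data frame into the dedicated SQL pool as an internal (managed) table;
    // running inside the Synapse workspace, the connector authenticates with AAD by default
    df.write.sqlanalytics("sqlpool1.dbo.SalesRecords", Constants.INTERNAL)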

Execute the cell, and you will see the new table appear with the data:

[Screenshot: the new table and its data in the dedicated SQL pool]
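You can also verify the result from the notebook itself by reading the table back through the same connector (again, the three-part table name is a placeholder):

    // Read the newly created table back from the dedicated pool and preview a few rows
    val check = spark.read.sqlanalytics("sqlpool1.dbo.SalesRecords")
    check.show(10)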

Observations in the driver log during this exercise:

 

[Screenshot: driver log entries showing the external data source, file format, and external table being created and dropped]

We can see that an external data source, an external file format, and an external table are created and then dropped during this automated process.

 

More information about other read/write API options in Spark for dedicated and serverless SQL pools can be found on this page.

 
