Read & write parquet files using Apache Spark in Azure Synapse Analytics
Published Jun 11 2020 04:45 AM

Apache Spark in Azure Synapse Analytics enables you to easily read and write parquet files placed on Azure storage. Apache Spark provides the following concepts that you can use to work with parquet files:


  • function that reads the content of a parquet file using PySpark
  • DataFrame.write.parquet function that writes the content of a data frame into a parquet file using PySpark
  • External table that enables you to select or insert data in parquet file(s) using Spark SQL.

In the following sections you will see how you can use these concepts to explore the content of files and write new data to parquet files. As a prerequisite, you need to have:

- An Azure storage account with a container (named parquet in the examples below) where your Azure AD user has read/write permissions

- An Azure Synapse workspace with an Apache Spark pool.

Writing parquet files 

PySpark enables you to create objects, load them into a data frame, and store them on Azure storage using the DataFrame.write.parquet() function:


from pyspark.sql import Row

# Define content
Employee = Row("firstName", "lastName", "email", "salary")

employee1 = Employee('Јован', 'Поповић', '', 100000)
employee2 = Employee('John', 'Doe', '', 120000 )
employee3 = Employee('', None, '', 160000 )
employee4 = Employee('Confucius', '孔子', '', 160000 )

employees = [employee1, employee2, employee3, employee4]
df = spark.createDataFrame(employees)

# Write the data frame as parquet files on Azure storage
# (replace <storage-account> with your storage account name)
df.write.parquet('abfss://parquet@<storage-account>')



Note that this code will create a set of parquet files on Azure Storage.

Reading parquet files

Once you create a parquet file, you can read its content using the function:


# read the content of the parquet files
# (replace <storage-account> with your storage account name)
df ='abfss://parquet@<storage-account>')


You can run this code in a Synapse Studio notebook to view the results.

Creating tables on parquet files

Apache Spark enables you to access your parquet files using the table API. You can create an external table on a set of parquet files using the following code:


LOCATION 'abfss://'


Once you have created your external table, you can read the content of the parquet files using the Spark SQL language:


FROM employees


You can also insert new records into the parquet files using the INSERT statement:


INSERT INTO employees
VALUES ('Nikola', 'Tesla', '', 110000)



NOTE: Apache Spark doesn't enable you to update or delete records in parquet tables. You need to convert parquet to Delta format if you want to update the content of parquet files.
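As a sketch, assuming Delta Lake is available on your Apache Spark pool, an existing parquet folder can be converted in place with Delta Lake's CONVERT TO DELTA command (the path placeholder matches the examples above and must point at your own storage):

```sql
CONVERT TO DELTA parquet.`abfss://parquet@<storage-account>`
```

After the conversion, UPDATE and DELETE statements become available on the table.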


Spark SQL provides concepts such as tables and the SQL query language that can simplify your access code.



The Apache Spark engine in Azure Synapse Analytics enables you to easily process your parquet files on Azure Storage. Learn more about the capabilities of the Apache Spark engine in Azure Synapse Analytics in the documentation.

