We are excited to announce the preview availability of Apache Spark™ 3.3 on Azure Synapse Analytics. The essential changes come from upgrading Apache Spark to version 3.3.1 and Delta Lake to version 2.1.0.
Check out the official release notes for Apache Spark 3.3.0 and Apache Spark 3.3.1 for the complete list of fixes and features. In addition, review the migration guidelines between Spark 3.2 and 3.3 to assess potential changes to your applications, jobs and notebooks.
Below is an extended summary of the key new features highlighted in this article for you to check out in more detail.
Related to Apache Spark version 3.3.0 and 3.3.1
Related to Delta Lake version 2.1.0
- Support for Apache Spark 3.3.
- Support for [TIMESTAMP | VERSION] AS OF in SQL. With Spark 3.3, Delta now supports time travel in SQL to query older data easily. With this update, time travel is now available both in SQL and through the DataFrame API (a SQL sketch follows this list).
- Support for Trigger.AvailableNow when streaming from a Delta table. Spark 3.3 introduces Trigger.AvailableNow for running streaming queries like Trigger.Once in multiple batches. This is now supported when using Delta tables as a streaming source (see the streaming sketch after this list).
- Support for SHOW COLUMNS to return the list of columns in a table.
- Support for DESCRIBE DETAIL in the Scala and Python DeltaTable API. Retrieve detailed information about a Delta table using the DeltaTable API and in SQL (an example covering this and SHOW COLUMNS follows the list).
- Support for returning operation metrics from SQL Delete, Merge, and Update commands. Previously these SQL commands returned an empty DataFrame; now they return a DataFrame with useful metrics about the operation performed (see the sketch after this list).
- Optimize performance improvements
  - Added a config to use `repartition(1)` instead of `coalesce(1)` in Optimize for better performance when compacting many small files.
  - Improve Optimize performance by using a queue-based approach to parallelize the compaction jobs.
- Other notable changes
  - Support for using variables in the VACUUM and OPTIMIZE SQL commands.
  - Improvements for CONVERT TO DELTA with catalog tables:
    - Autofill the partition schema from the catalog when it's not provided.
    - Use partition information from the catalog to find the data files to commit, instead of doing a full directory scan. Only data files under the directories of active partitions are committed, rather than all data files in the table directory.
  - Support for Change Data Feed (CDF) batch reads on column mapping enabled tables when DROP COLUMN and RENAME COLUMN have not been used. See the documentation for more details.
  - Improve Update performance by enabling schema pruning in the first pass.
  - Fix for `DeltaTableBuilder` to preserve table property case of non-delta properties when setting properties.
  - Fix for duplicate CDF row output for delete-when-matched merges with multiple matches.
  - Fix for consistent timestamps in a MERGE command.
  - Fix for incorrect operation metrics for DataFrame writes with a `replaceWhere` option.
  - Fix for a bug in Merge that sometimes caused empty files to be committed to the table.
  - Change in log4j properties file format. Apache Spark upgraded log4j from version 1.x to 2.x, which uses a different format for the log4j properties file (a sketch follows this list). Refer to the Spark upgrade notes.
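To make a few of these features concrete, here is a minimal PySpark sketch of SQL time travel. The table name `events` and the timestamp are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Query an older snapshot of a Delta table by version number.
spark.sql("SELECT * FROM events VERSION AS OF 1").show()

# Query the table as it was at a given point in time.
spark.sql("SELECT * FROM events TIMESTAMP AS OF '2022-10-01 00:00:00'").show()
```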
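Trigger.AvailableNow processes all data available when the query starts, in multiple batches, and then stops. A minimal sketch, where the source table, target table, and checkpoint path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stream from a Delta table as the source and stop once all
# currently available data has been processed.
query = (
    spark.readStream.table("events")
    .writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events_copy")
    .trigger(availableNow=True)  # like Trigger.Once, but in multiple batches
    .toTable("events_copy")
)
query.awaitTermination()
```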
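SHOW COLUMNS and DESCRIBE DETAIL can be sketched as follows, again with a hypothetical table name:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# List the columns of a Delta table in SQL.
spark.sql("SHOW COLUMNS IN events").show()

# Retrieve detailed table information (format, location, size, and so on)
# through the Python DeltaTable API; DESCRIBE DETAIL is also available in SQL.
DeltaTable.forName(spark, "events").detail().show()
```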
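For operation metrics, a DML command such as DELETE now returns a DataFrame with metrics instead of an empty one. A sketch with a hypothetical predicate:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# DELETE, UPDATE, and MERGE now return a DataFrame with operation
# metrics (for example, the number of affected rows).
metrics = spark.sql("DELETE FROM events WHERE event_date < '2022-01-01'")
metrics.show()
```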
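Finally, regarding the log4j change: custom log4j.properties files must be migrated to the log4j 2.x format. Below is an illustrative fragment of the new format; the logger names and levels are examples only, so refer to the Spark upgrade notes for the full template:

```
# log4j 2.x style properties (the old log4j 1.x syntax no longer applies).
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```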
For further details, review the full Delta Lake 2.1.0 release notes.
For detailed contents and lifecycle information on all runtimes, you can visit Apache Spark runtimes in Azure Synapse.
Please note that the team is working on upgrading some major component versions during the public preview, before announcing GA. This upgrade will ensure that the latest versions of these components are available for use, and that any potential bugs or security issues are addressed.