Using Spark SQL to troubleshoot a SQL on-demand conversion overflow
Published Aug 28 2020 10:15 AM 1,491 Views

This case started as an issue on SQL on-demand while reading a parquet file. The error was a conversion overflow:

 Error: Conversion overflows.

My objective in this post is to describe a possible workaround, reading the file in Spark, and also to give some ideas on how to troubleshoot this kind of issue with a notebook.

First, I loaded the file in Spark using Scala, and it completed successfully:



// Create a DataFrame from the parquet file
val df = spark.read.parquet("abfss://<storage>")
display(df.limit(10))



No big deal here. After that, I wanted to check the metadata:



// Print the schema in a tree format
df.printSchema()



You should have a result like the following:
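As an illustration (a minimal local sketch with made-up column names, not the actual file's schema), this is the kind of tree printSchema() produces for a file containing a high-precision decimal column:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

// Local session just for the illustration; in a Synapse notebook the session already exists
val spark = SparkSession.builder().master("local[1]").appName("schema-demo").getOrCreate()

// Hypothetical schema with a high-precision decimal, similar to the problem file
val schema = StructType(Seq(
  StructField("id", IntegerType),
  StructField("column_dec_converted", DecimalType(38, 18))
))
val demo = spark.createDataFrame(spark.sparkContext.emptyRDD[org.apache.spark.sql.Row], schema)

demo.printSchema()
// root
//  |-- id: integer (nullable = true)
//  |-- column_dec_converted: decimal(38,18) (nullable = true)
```

The decimal precision and scale shown in this tree are exactly what you need for the SQL on-demand troubleshooting below.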



You can find more information here:


Once I had the metadata, I used the column names to run the query against SQL on-demand. With this information, I found the issue was related to a decimal column.
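One possible workaround in Spark, once the offending column is identified, is to cast it and write a converted copy of the file. The sketch below uses an in-memory stand-in for the real file; the column name column_dec_converted is an assumption, and for the actual case the source would be spark.read.parquet("abfss://<storage>"):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[1]").appName("cast-demo").getOrCreate()
import spark.implicits._

// Stand-in for the real parquet file: a single high-precision decimal column
val df = Seq(BigDecimal("123456789.123456789")).toDF("column_dec_converted")

// Cast the decimal column to string so the consumer no longer hits the overflow
val fixed = df.withColumn("column_dec_converted",
  col("column_dec_converted").cast("string"))

fixed.printSchema()
```

The converted DataFrame could then be written back with fixed.write.parquet(...) and queried from SQL on-demand without the overflow.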

Here are some SQL on-demand query tips for conversion issues:

1) You can use schema inference while querying, and use the WITH clause to convert the column datatype. For example, I will convert the decimal column named column_dec_converted to varchar.



    SELECT *
    FROM OPENROWSET(
        BULK 'https://<storage>',
        FORMAT='PARQUET'
    ) WITH ( column_dec_converted varchar(100) ) AS rows


2) You can also apply the conversion in the SELECT clause, as you would on standard SQL Server:

    SELECT CAST(column_dec_converted AS varchar(100))
    FROM OPENROWSET(
        BULK 'https://<storage>',
        FORMAT='PARQUET'
    ) AS Result
SQL on-demand does not support DML, but it fully supports SELECT syntax.
Another option: you can also convert the parquet file you read into a SQL DW table. The steps are here:

That is it!

Liliam C Leme

UK Engineer
