Forum Discussion
Error in copy activity with Oracle 2.0
- May 19, 2025
I need to add to this that the solution in my situation was to add the following line to the Parquet dataset JSON:
"useParquetV2": true
at the same level as compressionCodec, in combination with adaardor's "Support V1 data types" suggestion above. Without it, the Parquet file could not be read by Databricks, which failed with an error that a decimal precision of 256 is not allowed (the maximum is 38).
Full code for Parquet dataset:
{
    "name": "Lake_PARQUET_folder",
    "properties": {
        "linkedServiceName": {
            "referenceName": "DataLake",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "source": {
                "type": "string"
            },
            "namespace": {
                "type": "string"
            },
            "entity": {
                "type": "string"
            },
            "partition": {
                "type": "string"
            },
            "container": {
                "type": "string",
                "defaultValue": "raw"
            }
        },
        "folder": {
            "name": "Data Lakes"
        },
        "annotations": [],
        "type": "Parquet",
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "folderPath": {
                    "value": "@concat(dataset().source, '/', dataset().namespace, '/', dataset().entity, '/', dataset().partition)",
                    "type": "Expression"
                },
                "fileSystem": {
                    "value": "@dataset().container",
                    "type": "Expression"
                }
            },
            "compressionCodec": "gzip",
            "useParquetV2": true
        },
        "schema": []
    },
    "type": "Microsoft.DataFactory/factories/datasets"
}
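If you want to verify the fix on the Databricks side, here is a minimal PySpark sketch that checks the decimal columns of the copied Parquet files stay within the 38-digit precision limit. The abfss path is a placeholder for wherever your copy activity lands the files, and spark is the session Databricks notebooks provide by default:

# Minimal sketch: confirm decimal columns in the copied Parquet files
# fit Databricks' DecimalType limit (precision <= 38).
# Replace the placeholder path with your storage account and folder structure.
from pyspark.sql.types import DecimalType

df = spark.read.parquet(
    "abfss://raw@<storageaccount>.dfs.core.windows.net/<source>/<namespace>/<entity>/<partition>"
)

for field in df.schema.fields:
    if isinstance(field.dataType, DecimalType):
        print(field.name, field.dataType.precision, field.dataType.scale)
        assert field.dataType.precision <= 38, (
            f"{field.name} has precision {field.dataType.precision}, max is 38"
        )

With "useParquetV2": true in place, the read should succeed and every decimal column should report a precision of 38 or less.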
Hi, can you please show precisely where you put the "Support V1 data types" setting mentioned above?
It is shown in the reply from adaardor in this thread. Click "Show entire thread" or something similar to view all replies.
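For reference, the "Support V1 data types" checkbox lives on the Oracle linked service, not on the dataset. A sketch of where it would sit in the linked service JSON (the property name supportV1DataTypes and the surrounding structure are assumed from the Oracle 2.0 connector's usual layout; the server, username, and password values are placeholders, so verify against your own linked service JSON):

{
    "name": "Oracle_LS",
    "properties": {
        "type": "Oracle",
        "version": "2.0",
        "typeProperties": {
            "server": "<host>:<port>/<servicename>",
            "authenticationType": "Basic",
            "username": "<user>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            },
            "supportV1DataTypes": true
        }
    }
}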