Forum Discussion
Error in copy activity with Oracle 2.0
- May 19, 2025
In your linked service connection settings, have you tried the additional connection properties to support the v1 data types?
- martin_larsson_ellevio · May 13, 2025 · Brass Contributor
Now I have tried it and it works!! Thank you
- martin_larsson_ellevio · May 19, 2025 · Brass Contributor
I need to add that the solution in my case was to add the following line to the Parquet dataset JSON:
"useParquetV2": true
at the same level as compressionCodec, in combination with adaardor's "Support V1 data types" suggestion above. Without it, the Parquet file could not be read by Databricks, which failed with an error that a decimal precision of 256 is not allowed (the maximum is 38).
Full code for Parquet dataset:
{
    "name": "Lake_PARQUET_folder",
    "properties": {
        "linkedServiceName": {
            "referenceName": "DataLake",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "source": {
                "type": "string"
            },
            "namespace": {
                "type": "string"
            },
            "entity": {
                "type": "string"
            },
            "partition": {
                "type": "string"
            },
            "container": {
                "type": "string",
                "defaultValue": "raw"
            }
        },
        "folder": {
            "name": "Data Lakes"
        },
        "annotations": [],
        "type": "Parquet",
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "folderPath": {
                    "value": "@concat(dataset().source, '/', dataset().namespace, '/', dataset().entity, '/', dataset().partition)",
                    "type": "Expression"
                },
                "fileSystem": {
                    "value": "@dataset().container",
                    "type": "Expression"
                }
            },
            "compressionCodec": "gzip",
            "useParquetV2": true
        },
        "schema": []
    },
    "type": "Microsoft.DataFactory/factories/datasets"
}
- Ace_Is_On_The_Case · Jan 27, 2026 · Copper Contributor
I don't see this parameter ("useParquetV2": true) mentioned in the MS ADF documentation:
https://learn.microsoft.com/en-us/azure/data-factory/format-parquet
How did you get to it? Where is it mentioned? And what does it do?
--------
BTW, it worked for me - great! Thanks!
However, the property for the Oracle connector is "supportV1DataTypes" (without the spaces seen in the screenshot). It can be found in the ADF documentation: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#linked-service-properties
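For reference, a minimal sketch of where that property would sit in an Oracle linked service definition. Only "supportV1DataTypes" comes from the documentation page linked above; the linked service name, server address, and credential placeholders here are illustrative, and your authentication setup may differ.

```json
{
    "name": "OracleLinkedService",
    "properties": {
        "type": "Oracle",
        "typeProperties": {
            "server": "<host>:<port>/<service_name>",
            "authenticationType": "Basic",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            },
            "supportV1DataTypes": true
        }
    }
}
```

With this set, the Oracle 2.0 connector maps numeric columns to the v1 connector's data types, which pairs with the "useParquetV2": true dataset setting discussed above to keep decimal precision within what Databricks accepts.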