Latest Discussions
Oracle 2.0 property authenticationType is not specified
Solved · martin_larsson_ellevio · May 22, 2025 · Brass Contributor · 461 Views · 1 Like · 6 Comments

I just published an upgrade to the Oracle 2.0 connector (linked service), and all my pipelines ran OK in dev. This morning I woke up to lots of red pipelines that ran during the night. I get the following error message:

ErrorCode=OracleConnectionOpenError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message= Failed to open the Oracle database connection.,Source=Microsoft.DataTransfer.Connectors.OracleV2Core,''Type=System.ArgumentException, Message=The required property is not specified. Parameter name: authenticationType,Source=Microsoft.Azure.Data.Governance.Plugins.Core,'

Here is the code for my Oracle linked service:

```json
{
  "name": "Oracle",
  "properties": {
    "parameters": {
      "host": { "type": "string" },
      "port": { "type": "string", "defaultValue": "1521" },
      "service_name": { "type": "string" },
      "username": { "type": "string" },
      "password_secret_name": { "type": "string" }
    },
    "annotations": [],
    "type": "Oracle",
    "version": "2.0",
    "typeProperties": {
      "server": "@{linkedService().host}:@{linkedService().port}/@{linkedService().service_name}",
      "authenticationType": "Basic",
      "username": "@{linkedService().username}",
      "password": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "Keyvault",
          "type": "LinkedServiceReference"
        },
        "secretName": {
          "value": "@linkedService().password_secret_name",
          "type": "Expression"
        }
      },
      "supportV1DataTypes": true
    },
    "connectVia": {
      "referenceName": "leap-prod-onprem-ir-001",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```

As you can see, "authenticationType" is defined, but my guess is that the publish and deployment step somehow drops that property. We are using the "modern" CI/CD flow described at https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-delivery-improvements. Would appreciate some help with this!
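Since the property is present in the source JSON but the deployed factory reports it missing, one thing worth checking is the ARM template that the npm build step (@microsoft/azure-data-factory-utilities) exports, typically ARMTemplateForFactory.json, to see whether authenticationType survives the export. Below is a minimal sketch of what the linked service resource would be expected to look like there, trimmed to the relevant properties (parameter wiring and password omitted); this is an assumption about the export output, not the thread's accepted answer.

```jsonc
// Sketch only: expected shape of the linked service inside the generated
// ARMTemplateForFactory.json, trimmed for readability.
{
  "type": "Microsoft.DataFactory/factories/linkedServices",
  "apiVersion": "2018-06-01",
  "name": "[concat(parameters('factoryName'), '/Oracle')]",
  "properties": {
    "type": "Oracle",
    "version": "2.0",
    "typeProperties": {
      "server": "@{linkedService().host}:@{linkedService().port}/@{linkedService().service_name}",
      "authenticationType": "Basic", // if this line is missing here, the export/publish step is dropping it
      "username": "@{linkedService().username}",
      "supportV1DataTypes": true
    }
  }
}
```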
Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime
Solved · adaardor · May 13, 2025 · Copper Contributor · 6.3K Views · 3 Likes · 15 Comments

This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought, "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1.

Most of my connections use service_name during authentication, so according to https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory, I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed. Failed to open the Oracle database connection. ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string ORA-12650: No common encryption or data integrity algorithm https://docs.oracle.com/error-help/db/ora-12650/

I did some digging on this error code, and the troubleshooting doc (https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle) suggests that I reach out to my Oracle DBA to update Oracle server settings. Which I did, but I have zero confidence the DBA will take any action.

Then I happened across this documentation about the upgraded connector: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector. Is this for real? ADF won't be able to connect to old versions of Oracle? If so, I'm effed, because my company is so, so legacy and all of our Oracle servers are at 11g.

I also tried adding additional connection properties in my linked service connection like this, but I honestly have no idea what I'm doing:

- Encryption client: accepted
- Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
- Crypto checksum client: accepted
- Crypto checksum types client: SHA1, SHA256, SHA384, SHA512

But no matter what, the issue persists. :( Am I missing something stupid? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime? I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.
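For background, ORA-12650 means the client and server could not agree on a network encryption or checksum algorithm, which is common when a newer client only offers modern algorithms and an old server such as 11g only offers legacy ones. The additional connection properties tried above can also be set directly in the 2.0 linked service JSON; the sketch below shows one way that might look, but the exact property names are an assumption based on the UI labels in the post (check them against the Oracle 2.0 connector documentation), and the algorithm lists only help if they include something the 11g server actually supports.

```jsonc
// Sketch only: client-side negotiation settings on an Oracle 2.0 linked service.
// Property names are assumed from the UI labels; host, user and IR names are hypothetical.
{
  "name": "Oracle_Legacy",
  "properties": {
    "type": "Oracle",
    "version": "2.0",
    "typeProperties": {
      "server": "myhost:1521/my_service_name",
      "authenticationType": "Basic",
      "username": "etl_user",
      "encryptionClient": "accepted",
      "encryptionTypesClient": "AES256, AES192, AES128, 3DES168, 3DES112",
      "cryptoChecksumClient": "accepted",
      "cryptoChecksumTypesClient": "SHA256, SHA1"
    },
    "connectVia": {
      "referenceName": "SelfHostedIR",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```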
Error in copy activity with Oracle 2.0
Solved · martin_larsson_ellevio · May 12, 2025 · Brass Contributor · 1.4K Views · 0 Likes · 6 Comments

I am trying to migrate our copy activities to Oracle connector version 2.0. The destination is Parquet in an Azure Storage account, which works with the Oracle 1.0 connector. Just switching to 2.0 on the linked service and adjusting the connection string (server) is straightforward, and a "test connection" is successful. But in a pipeline with a copy activity using the linked service, I get the following error message on some tables.

ErrorCode=ParquetJavaInvocationException,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message: java.lang.ArrayIndexOutOfBoundsException:255 total entry:1 com.microsoft.datatransfer.bridge.parquet.ParquetWriterBuilderBridge.addDecimalColumn(ParquetWriterBuilderBridge.java:107) .,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,'

As the error suggests, it is unable to convert a decimal value from Oracle to Parquet. To me it looks like a bug in the new connector. Has anybody seen this before and found a solution? The 1.0 connector is apparently being deprecated in the coming weeks.

Here is the code for the copy activity:

```json
{
  "name": "Copy",
  "type": "Copy",
  "dependsOn": [],
  "policy": {
    "timeout": "1.00:00:00",
    "retry": 2,
    "retryIntervalInSeconds": 60,
    "secureOutput": false,
    "secureInput": false
  },
  "userProperties": [
    {
      "name": "Source",
      "value": "@{pipeline().parameters.schema}.@{pipeline().parameters.table}"
    },
    {
      "name": "Destination",
      "value": "raw/@{concat(pipeline().parameters.source, '/', pipeline().parameters.schema, '/', pipeline().parameters.table, '/', formatDateTime(pipeline().TriggerTime, 'yyyy/MM/dd'))}/"
    }
  ],
  "typeProperties": {
    "source": {
      "type": "OracleSource",
      "oracleReaderQuery": {
        "value": "SELECT @{coalesce(pipeline().parameters.columns, '*')}\nFROM \"@{pipeline().parameters.schema}\".\"@{pipeline().parameters.table}\"\n@{if(variables('incremental'), variables('where_clause'), '')}\n@{if(equals(pipeline().globalParameters.ENV, 'dev'),\n'FETCH FIRST 1000 ROWS ONLY'\n,''\n)}",
        "type": "Expression"
      },
      "partitionOption": "None",
      "convertDecimalToInteger": true,
      "queryTimeout": "02:00:00"
    },
    "sink": {
      "type": "ParquetSink",
      "storeSettings": {
        "type": "AzureBlobFSWriteSettings"
      },
      "formatSettings": {
        "type": "ParquetWriteSettings",
        "maxRowsPerFile": 1000000,
        "fileNamePrefix": {
          "value": "@variables('file_name_prefix')",
          "type": "Expression"
        }
      }
    },
    "enableStaging": false,
    "translator": {
      "type": "TabularTranslator",
      "typeConversion": true,
      "typeConversionSettings": {
        "allowDataTruncation": true,
        "treatBooleanAsNumber": false
      }
    }
  },
  "inputs": [
    {
      "referenceName": "Oracle",
      "type": "DatasetReference",
      "parameters": {
        "host": { "value": "@pipeline().parameters.host", "type": "Expression" },
        "port": { "value": "@pipeline().parameters.port", "type": "Expression" },
        "service_name": { "value": "@pipeline().parameters.service_name", "type": "Expression" },
        "username": { "value": "@pipeline().parameters.username", "type": "Expression" },
        "password_secret_name": { "value": "@pipeline().parameters.password_secret_name", "type": "Expression" },
        "schema": { "value": "@pipeline().parameters.schema", "type": "Expression" },
        "table": { "value": "@pipeline().parameters.table", "type": "Expression" }
      }
    }
  ],
  "outputs": [
    {
      "referenceName": "Lake_PARQUET_folder",
      "type": "DatasetReference",
      "parameters": {
        "source": { "value": "@pipeline().parameters.source", "type": "Expression" },
        "namespace": { "value": "@pipeline().parameters.schema", "type": "Expression" },
        "entity": { "value": "@variables('sink_table_name')", "type": "Expression" },
        "partition": { "value": "@formatDateTime(pipeline().TriggerTime, 'yyyy/MM/dd')", "type": "Expression" },
        "container": { "value": "@variables('container')", "type": "Expression" }
      }
    }
  ]
}
```
ADF Data Flow Fails with "Path does not resolve to any file" — Dynamic Parameters via Trigger
Solved · JohnG_PIT · May 01, 2025 · Copper Contributor · 167 Views · 0 Likes · 1 Comment

Hi guys, I'm running into an issue with my Azure Data Factory pipeline triggered by a Blob event. The trigger passes dynamic folderPath and fileName values into a parameterized dataset and mapping data flow. Everything works perfectly when I debug the pipeline manually, or when I run the trigger manually and pass in the values for folderPath and fileName directly. However, when the pipeline is triggered automatically via the blob event, the data flow fails with the following error:

Error Message: Job failed due to reason: at Source 'CSVsource': Path /financials/V02/Forecast/ForecastSampleV02.csv does not resolve to any file(s). Please make sure the file/folder exists and is not hidden. At the same time, please ensure special character is not included in file/folder name, for example, name starting with _

I've verified that the blob file exists, the trigger fires correctly and passes parameters, the path looks valid, and the dataset is parameterized correctly with @dataset().folderPath and @dataset().fileName.

I've attached screenshots of:

🔵 00-Pipeline Trigger Configuration On Blob creation
🔵 01-Trigger Parameters
🔵 02-Pipeline Parameters
🔵 03-Data flow Parameters
🔵 04-Data flow Parameters without default value
🔵 05-Data flow CSVsource parameters
🔵 06-Data flow Source Dataset
🔵 07-Data flow Source dataset Parameters
🔵 08-Data flow Source Parameters
🔵 09-Parameters passed to the pipeline from the trigger
🔵 10-Data flow error message

https://primeinnovativetechnologies-my.sharepoint.com/:b:/g/personal/john_primeinntech_com/EYoH5Sm_GaFGgvGAOEpbdXQB7QJFeXvbFmCbZiW85PwrNA?e=0yjeJR

What could be causing the data flow to fail on file path resolution only when triggered, even though the exact same parameters succeed during manual debug runs? Could this be related to:

- Extra slashes or encoding in trigger output?
- Misuse of @dataset().folderPath and fileName in the dataset?
- Limitations in how blob trigger outputs are parsed?

Any insights would be appreciated! Thank you
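One detail worth comparing between the debug values and the trigger output: the storage event trigger's folderPath includes the container name (for example financials/V02/Forecast), so if the dataset also supplies the container, the resolved path at run time can differ from what was typed in during debugging. Below is a minimal sketch of the trigger-to-pipeline wiring, with hypothetical pipeline and parameter names and the storage account scope omitted for brevity; it illustrates the documented expressions, not the accepted answer from this thread.

```jsonc
// Sketch only: mapping storage-event trigger outputs to pipeline parameters.
// Pipeline name is hypothetical; "scope" (storage account resource ID) is omitted.
{
  "name": "OnBlobCreated",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/financials/blobs/V02/Forecast/",
      "events": [ "Microsoft.Storage.BlobCreated" ]
    },
    "pipelines": [
      {
        "pipelineReference": { "referenceName": "LoadForecast", "type": "PipelineReference" },
        "parameters": {
          "folderPath": "@triggerBody().folderPath", // includes the container name
          "fileName": "@triggerBody().fileName"
        }
      }
    ]
  }
}
```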
Azure Devops and Data Factory
Solved · bcarlson_f · Mar 04, 2025 · Copper Contributor · 260 Views · 0 Likes · 4 Comments

I have started a new job and taken over ADF. I know how to use DevOps to integrate and deploy when everything is up and running. The problem is, it's all out of sync. I need to learn ADO/ADF as they work together so I can fix this. Any recommendations on where to start? Everything on YouTube starts with a fresh environment, which I'd be fine with. I'm not new to ADO, but I've never been the setup guy before; I'm strong on ADO, but just as a user.

Here are some of the problems I have:

- A lot of work has been done directly in the DEV branch rather than creating feature branches.
- Setting up a pull request from DEV to PROD wants to pull everything, even in-progress or abandoned code changes.
- Some changes were made in the PROD branch directly, so I'll need to pull those changes back to DEV. We have valid changes in both DEV and PROD.
- I'm having trouble cherry-picking. It only lets me select one commit, then says I need to use the command line. It doesn't tell me the error, and I don't know what tool to use for the command line.
- I've tried using Visual Studio, and I can pull in the Data Factory code, but I have all the same problems there.

I'm not looking for answers to these questions, but for how to find the answers. Is this a Data Factory issue, or should I be looking at DevOps? I'm having no trouble managing the database code or Power BI in DevOps, but I created those fresh. Thanks for any help!
Different pools for workers and driver - in ADF triggered ADB jobs
Solved · 157 Views · 0 Likes · 1 Comment

Hello All, Azure Databricks allows the use of separate compute pools for drivers and workers when you create a job via the native Databricks Workflows. For customers using ADF as an orchestrator for ADB jobs, is there a way to achieve the same when invoking notebooks/jobs via ADF? The linked service configuration in ADF seems to allow only one instance pool. Appreciate any pointers. Thanks!
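The Databricks linked service in ADF exposes only a single instancePoolId, so one possible workaround (a sketch under the assumption that calling the Databricks Jobs 2.1 runs/submit API from a Web activity is acceptable, with authentication handled separately) is to submit the run yourself and name separate pools for workers and driver in the new_cluster spec:

```jsonc
// Sketch only: request body for POST {workspace-url}/api/2.1/jobs/runs/submit,
// sent from an ADF Web activity. Pool IDs, notebook path and task name are hypothetical.
{
  "run_name": "adf-triggered-run",
  "tasks": [
    {
      "task_key": "main",
      "notebook_task": { "notebook_path": "/Shared/my_notebook" },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "num_workers": 4,
        "instance_pool_id": "worker-pool-id",
        "driver_instance_pool_id": "driver-pool-id"
      }
    }
  ]
}
```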
How to save Azure Data Factory work (objects)?
Solved · AzureNewbie1 · Jun 06, 2024 · Brass Contributor · 1.6K Views · 0 Likes · 2 Comments

Hi, I'm new to Azure Data Factory (ADF). I need to learn it in order to ingest external third-party data into our domain. I shall be using ADF pipelines to retrieve the data and then load it into an Azure SQL Server database.

I currently develop Power BI reports and write SQL scripts to feed the Power BI reporting. These reports and scripts are saved on a backed-up drive, so if anything disappears, I can always use the backups to reinstall the work. The target SQL database scripts, i.e. the tables the ADF pipelines will load to, will be backed up following the same method. How do I save the ADF pipelines and any other ADF objects that I may create? (I don't know exactly what will be created, as I'm yet to develop anything in ADF.)

I've read about the CI/CD process, but I don't think it's applicable to me. We are not using multiple environments (i.e. Dev, Test, UAT, Prod); I am using a Production environment only. Each data source that needs to be imported will have its own pipeline, so breaking one pipeline should not affect the other pipelines, and that's why I feel a single environment is sufficient. I am the only developer working within ADF, so I have no need to collaborate with peers or promote joint coding ventures.

Does ADF back up its Data Factories by default? If it does, can I trust that, should our instance of ADF be deleted, I can retrieve the backed-up version to reinstall or roll back? Is there a process/software which saves the ADF objects so I can reinstall them if I need to? (By the way, I'm not sure how to reinstall them, so I'll have to learn that.) Thanks.
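ADF does not keep user-restorable backups of pipelines on its own; the usual way to get versioned, recoverable copies of every pipeline, dataset, and linked service is to connect the factory to a Git repository (Manage > Git configuration), after which every Save writes the JSON definitions to the repo. Below is a minimal sketch of what that looks like on the factory resource, assuming Azure DevOps Git; the organization, project, repo and branch names are hypothetical.

```jsonc
// Sketch only: Git integration on the Data Factory resource (Azure DevOps Git).
// All names below are hypothetical placeholders.
{
  "name": "my-data-factory",
  "properties": {
    "repoConfiguration": {
      "type": "FactoryVSTSConfiguration",
      "accountName": "my-ado-org",
      "projectName": "DataPlatform",
      "repositoryName": "adf",
      "collaborationBranch": "main",
      "rootFolder": "/"
    }
  }
}
```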
Complex ADF transformation on Specific Rows to Columns
Solved · GemmaLowe · Mar 01, 2024 · Copper Contributor · 545 Views · 0 Likes · 1 Comment

Hello Experts, I have a transformation for which I have tried a few data flow scenarios that do not yield the results needed. We have a file that extracts data like the below sample:

| COL1 | COL2 |
| --- | --- |
| Manufacturing | 1-Jan-23 |
| Baked Goods | Lemon Cookies |
| Raw Materials | 40470 |
| Factory Overheads | 60705 |
| Staff Costs | 91057.5 |
| Electricity | 136586.25 |

I would like the output table to look like the below:

| COL1 | COL2 | NewCOL3 | NewCOL4 | NewCOL5 | NewCOL6 |
| --- | --- | --- | --- | --- | --- |
| Raw Materials | 40470 | Manufacturing | 1-Jan-23 | Baked Goods | Lemon Cookies |
| Factory Overheads | 60705 | Manufacturing | 2-Jan-23 | Baked Goods | Lemon Cookies |
| Staff Costs | 91057.5 | Manufacturing | 3-Jan-23 | Baked Goods | Lemon Cookies |
| Electricity | 136586.25 | Manufacturing | 4-Jan-23 | Baked Goods | Lemon Cookies |

The transformation should take the values of the first 4 rows as new column values and remove any nulls or whitespace. I have used UNPIVOT and LOOKUP transformations, but they return the column name as the value rather than the values in rows 1-4, so I know I am missing a step in the process. Any suggestions on the data flow for this challenge?
Trigger a job on demand
Solved · Messan APETE · Jan 09, 2024 · Copper Contributor · 552 Views · 0 Likes · 1 Comment

Hello, I am very new to ADF, and my background is mainly SQL Server Agent. I wonder if there is a way in ADF to create a job that can be triggered on demand, as in SQL Server Agent? If possible, it would be nice if somebody could point me to the documentation or a sample showing how to do so. Regards
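The closest equivalent of starting a SQL Server Agent job manually is the pipeline's "Trigger now" option in the ADF Studio; the same thing can be done programmatically through the Pipelines Create Run REST operation (also exposed as Invoke-AzDataFactoryV2Pipeline in PowerShell). Below is a minimal sketch of the REST call, with a hypothetical pipeline and parameter name.

```jsonc
// Sketch only: starting a pipeline run on demand via the REST API.
// POST https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}
//      /providers/Microsoft.DataFactory/factories/{factory}/pipelines/CopyDaily/createRun?api-version=2018-06-01
// The body carries pipeline parameter values; "runDate" is hypothetical and the
// body can be empty if the pipeline has no parameters.
{
  "runDate": "2024-01-09"
}
```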
Scheduling trigger hourly but only during the day
Solved · grantbeverley · Nov 19, 2023 · Copper Contributor · 787 Views · 0 Likes · 2 Comments

Hello, I'm new to this group but am hoping someone may have some words of advice for me. I'm building a web application that uses Power Automate and Azure Data Factory to do the hard work of getting all the data from multiple sources into a single SQL database. It's all working quite well, but I've just realised that my hourly triggers (part of the client's specification) are a bit wasteful, as during the evening there's no new data coming from his operation. Essentially, from 6:00pm until 6:00am the pipeline is just copying data that has not changed.

Does anyone know if it's possible to schedule hourly runs of the pipeline between certain hours of the day? I've seen the daily schedule that allows me to pick the days, but that only seems to facilitate one run per day at a specified time. I'm looking for something a little more dynamic than that. Many thanks in advance!
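A schedule trigger can do this: with the frequency set to Day, the optional schedule block lists the exact hours and minutes to fire, so the runs can be limited to daytime hours only. A minimal sketch follows, with a hypothetical pipeline name and time zone.

```jsonc
// Sketch only: a schedule trigger that fires on the hour from 06:00 to 18:00.
// Pipeline name, start time and time zone are hypothetical.
{
  "name": "DaytimeHourly",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2023-11-20T06:00:00",
        "timeZone": "GMT Standard Time",
        "schedule": {
          "hours": [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
          "minutes": [0]
        }
      }
    },
    "pipelines": [
      {
        "pipelineReference": { "referenceName": "IngestOperationalData", "type": "PipelineReference" }
      }
    ]
  }
}
```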
Tags
- azure data factory (174 Topics)
- Azure ETL (46 Topics)
- Copy Activity (40 Topics)
- Azure Data Integration (39 Topics)
- Mapping Data Flows (28 Topics)
- Azure Integration Runtime (25 Topics)
- ADF (5 Topics)
- azure data factory v2 (3 Topics)
- Data Flows (3 Topics)
- pipeline (3 Topics)