Latest Discussions
ADF unable to ingest partitioned Delta data from Azure Synapse Link (Dataverse/FnO)
We are ingesting Dynamics 365 Finance & Operations (FnO) data into ADLS Gen2 using Azure Synapse Link for Dataverse, and then attempting to load that data into Azure SQL Database using Azure Data Factory (ADF). This is part of a migration effort, as Export to Data Lake is being deprecated.

Source details:
- Source: ADLS Gen2
- Data generated by: Azure Synapse Link for Dataverse (FnO)
- Format on lake: Delta / Parquet, with a partitioned folder structure (e.g. PartitionId=xxxx)
- Destination: Azure SQL Database

Issue observed in ADF: when configuring ADF pipelines using an ADLS Gen2 dataset with Delta / Parquet, recursive folder traversal, and wildcard paths, we encounter either no data returned in Data Preview or a runtime error such as "No partitions information found in metadata file". Despite this, the data is present in ADLS, and the same data can be queried successfully using Synapse serverless SQL.

Key question for ADF / Synapse engineers: what is the recommended and supported ADF ingestion pattern for partitioned Delta/Parquet data produced by Azure Synapse Link for Dataverse? Specifically:
- Should ADF read the Delta tables directly, or use Synapse serverless SQL external tables/views as an intermediate layer?
- Is there a reference architecture for Synapse Link → ADLS → ADF → Azure SQL?
- Are there ADF limitations when consuming Synapse Link–generated Delta tables?

Many customers are now forced to migrate due to the Export to Data Lake deprecation, but current ADF documentation does not clearly explain how to replace existing ingestion pipelines when using Synapse Link for FnO. Any guidance, patterns, or official documentation would be greatly appreciated.

Raheelislam · Jan 09, 2026 · Copper Contributor · 10 Views · 0 Likes · 0 Comments
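For reference, a minimal sketch of the second pattern asked about above: Synapse serverless SQL views defined over the Synapse Link output, with an ADF Copy activity reading the views into Azure SQL. Dataset, view, and table names are hypothetical placeholders, and the source assumes a linked service to the serverless SQL endpoint via the Azure Synapse Analytics connector; treat it as an illustration of the shape, not a confirmed reference architecture.

    {
        "name": "Copy_CustTable_From_Serverless_View",
        "type": "Copy",
        "description": "Sketch only: reads a serverless SQL view defined over the Synapse Link Delta output, then writes to Azure SQL Database.",
        "inputs": [ { "referenceName": "DS_Serverless_Views", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "DS_AzureSql_CustTable", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": {
                "type": "SqlDWSource",
                "sqlReaderQuery": "SELECT * FROM dbo.vw_CustTable"
            },
            "sink": {
                "type": "AzureSqlSink",
                "preCopyScript": "TRUNCATE TABLE dbo.CustTable"
            }
        }
    }

As far as I can tell, ADF's native Delta support sits in mapping data flows as an inline dataset format rather than in Copy activity datasets, which is part of why a plain Parquet/wildcard dataset stumbles over the Delta metadata; whether the view-based pattern above or a data flow over the Delta output is the officially recommended replacement is exactly the open question here.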
User Properties of Activities in ADF: How to add dynamic content in it?

In ADF, I am using a ForEach loop containing an Execute Pipeline activity, which runs once per iteration based on the items supplied to the loop. I am stuck on a scenario that requires me to add a dynamic content expression to the User Properties of individual ADF activities. Specifically, I want to add a dynamic content expression to the User Properties of the Execute Pipeline activity so that the individual runs of that activity show up in Azure Monitor with a specific label attached through their User Properties. The reason I need dynamic content here is that each iteration corresponds to a particular step from a set of steps configured for the data load job as a whole, which is orchestrated through ADF. To identify the association with the respective job step, I need to add a dynamic content expression to the activity's User Properties. Any response on this is highly appreciated. Thank you!

manuj · Dec 28, 2025 · Copper Contributor · 62 Views · 1 Like · 1 Comment
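A sketch of one way this can be expressed, assuming the activity JSON is edited directly in the pipeline's code view: each activity accepts a userProperties array, and in my understanding its value fields take the same expression syntax as other dynamic content. The StepName field on the ForEach item is a hypothetical name standing in for whatever the job-step identifier is actually called.

    {
        "name": "Execute Load Step",
        "type": "ExecutePipeline",
        "description": "Sketch: 'StepName' is an assumed field on the ForEach item; rename it to match the real item schema.",
        "userProperties": [
            { "name": "JobStep", "value": "@{item().StepName}" },
            { "name": "TriggeringRunId", "value": "@{pipeline().RunId}" }
        ],
        "typeProperties": {
            "pipeline": { "referenceName": "PL_Load_Step", "type": "PipelineReference" },
            "waitOnCompletion": true
        }
    }

Each labelled run should then be identifiable in the monitoring view via the JobStep user property.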
Copy Data Activity Failed with Unreasonable Cause

It is a simple setup, but it has baffled me a lot. I'd like to copy data to a data lake via an API. Here are the steps I've taken:
1. Created an HTTP linked service.
2. Created a dataset with the HTTP Binary data format.
3. Created a pipeline with a Copy Data activity only.
4. Made sure the linked service and dataset connections all test fine.
5. Created a sink dataset with 3 parameters.
6. Passed the parameters from the pipeline to the sink dataset.

That's all. Simple, right? But the pipeline failed with the message "usually this is caused by invalid credentials." Summary: no need to worry about the sink side of parameters, etc., which I have used unchanged for years on other pipelines, all of which succeeded. This time the API failed to reach the data lake from the source side, reporting "invalid credentials". In step 4 above, the linked service and dataset connections succeeded, i.e. credentials have already been checked and passed. How come the Copy Data activity then fails complaining about invalid credentials? Pretty weird. Any advice and suggestions are welcome.

AJ81 · Dec 03, 2025 · Copper Contributor · 41 Views · 0 Likes · 0 Comments
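For comparison, this is roughly the minimal JSON shape such a pipeline reduces to when the HTTP side is read as Binary and landed in ADLS Gen2; dataset names are placeholders, and the sink parameters are omitted since they are not in question. When the test connection passes but the run fails on credentials, the difference is usually in what gets resolved at execution time, so it can help to confirm which side (the HTTP source or the lake sink) the runtime error is actually attributed to in the activity output.

    {
        "name": "Copy API file to lake",
        "type": "Copy",
        "description": "Minimal sketch with placeholder dataset names; both datasets use the Binary format.",
        "inputs": [ { "referenceName": "DS_Http_Binary", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "DS_Adls_Binary", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": {
                "type": "BinarySource",
                "storeSettings": { "type": "HttpReadSettings", "requestMethod": "GET" }
            },
            "sink": {
                "type": "BinarySink",
                "storeSettings": { "type": "AzureBlobFSWriteSettings" }
            }
        }
    }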
Dataflow snowflake connection issue

I'm trying to set up a sink to Snowflake in a data flow, but when I test the connection it doesn't work; it just returns a JDBC driver communication error. I tried searching online and looking at the documentation but couldn't find anything about this issue. The same dataset works fine outside of the data flow, where I can preview the data, so there seems to be an issue specific to data flows. Even when I execute the data flow through a pipeline, the same error message comes up. Does anyone know how to solve this problem with data flows? Also, regarding the sink settings: if I select "recreate table", will it create the table in Snowflake if it doesn't already exist? I'm trying to find an easy way to copy a lot of tables into Snowflake without explicitly having to create each table first, especially when the metadata is only known at runtime. The pipeline Copy activity doesn't work for this, since the table has to exist before it can insert data, but data flows seem promising if the connection actually works.

MangoMagic · Nov 13, 2025 · Copper Contributor · 356 Views · 0 Likes · 3 Comments
Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime

This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought, "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1. Most of my connections use service_name during authentication, so per https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory, I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed. Failed to open the Oracle database connection. ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string ORA-12650: No common encryption or data integrity algorithm https://docs.oracle.com/error-help/db/ora-12650/

I did some digging on this error code, and the troubleshooting doc (https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle) suggests that I reach out to my Oracle DBA to update Oracle server settings. Which I did, but I have zero confidence the DBA will take any action. Then I happened across this documentation about the upgraded connector: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector. Is this for real? ADF won't be able to connect to old versions of Oracle? If so I'm effed, because my company is so, so legacy and all of our Oracle servers are at 11g. I also tried adding additional connection properties in my linked service like this, but I honestly have no idea what I'm doing:
- Encryption client: accepted
- Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
- Crypto checksum client: accepted
- Crypto checksum types client: SHA1, SHA256, SHA384, SHA512

But no matter what, the issue persists. :( Am I missing something stupid? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime? I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.

Solved · adaardor · Nov 01, 2025 · Copper Contributor · 7.5K Views · 3 Likes · 16 Comments
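For anyone comparing notes, the same settings listed above can also be expressed in the linked service JSON for the 2.0 connector. The property names and value format below are my reading of the Oracle 2.0 connector documentation and should be treated as assumptions to verify against the current docs; and because ORA-12650 is a negotiation failure, the server still has to accept at least one algorithm the new driver offers, so client-side settings alone may not be enough against an 11g server.

    {
        "name": "LS_Oracle_20",
        "properties": {
            "type": "Oracle",
            "version": "2.0",
            "description": "Sketch: the encryption/crypto property names and value format are assumptions taken from the 2.0 connector docs; verify before use.",
            "typeProperties": {
                "server": "myhost.example.com:1521/my_service_name",
                "authenticationType": "Basic",
                "username": "etl_user",
                "password": { "type": "SecureString", "value": "<password>" },
                "encryptionClient": "accepted",
                "encryptionTypesClient": "(AES256, AES192, AES128, 3DES168, 3DES112)",
                "cryptoChecksumClient": "accepted",
                "cryptoChecksumTypesClient": "(SHA256, SHA1)"
            },
            "connectVia": { "referenceName": "SelfHostedIR", "type": "IntegrationRuntimeReference" }
        }
    }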
Data flow sink to Blob storage not writing to subfolder

Hi everybody. This seems like it should be straightforward, but it just doesn't seem to work... I have a file containing JSON data, one document per line, with many different types of data. Each type is identified by a field named "OBJ", which tells me what kind of data it contains. I want to split this file into separate files in Blob storage, one per object type, prior to doing some downstream processing. So I have a very simple data flow: a source which loads the whole file, and a sink which writes the data back out to separate files. In the sink settings, I've set the "File name option" to "Name file as column data" and selected my OBJ column as the "Column data", and this basically works: it writes out a separate file for each OBJ value, containing the right data. So far, so good. However, the very simplest thing doesn't seem to work: I want to write the output files to a folder in my Blob storage container, but the sink seems to completely ignore the "Folder path" setting and just writes them into the root of the container. I can write my output files to a different container, but not to a subfolder inside the same container. It even creates the folder if it's not there already, but doesn't use it. Am I missing something obvious, or does the "Folder path" setting just not work when naming files from column data? Is there a way around this?

DuncanKing · Oct 14, 2025 · Copper Contributor · 45 Views · 0 Likes · 0 Comments
Problem with Linked Service to SQL Managed Instance

Hi, I'm trying to create a linked service to a SQL Managed Instance. The Managed Instance is configured with a VNet-local endpoint. If I try to connect with an auto-resolve IR or a SHIR, I get the following error:

The value of the property '' is invalid: 'The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net''. The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net'

Is there a way to connect to it without resorting to a private endpoint? Cheers, Alex

alexp01482 · Sep 26, 2025 · Copper Contributor · 75 Views · 0 Likes · 0 Comments
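One observation on the entry above, offered as a hedged sketch rather than a confirmed answer: the error shows the connection being attempted against the public endpoint host name (the .public. form, which also listens on port 3342 rather than 1433), so the host and port in the linked service are worth double-checking. If only the VNet-local endpoint is enabled, the usual way to connect without a private endpoint is a SHIR installed on a VM inside (or peered with) the Managed Instance's VNet, pointing at the VNet-local host name on port 1433. All names below are placeholders.

    {
        "name": "LS_SqlMi_VnetLocal",
        "properties": {
            "type": "AzureSqlMI",
            "description": "Sketch: assumes a SHIR running on a VM with network line of sight to the MI's VNet-local endpoint.",
            "typeProperties": {
                "connectionString": "Data Source=SQL01.<dns-zone>.database.windows.net,1433;Initial Catalog=<database>;User ID=<user>;Encrypt=True",
                "password": { "type": "SecureString", "value": "<password>" }
            },
            "connectVia": { "referenceName": "SHIR-InMiVnet", "type": "IntegrationRuntimeReference" }
        }
    }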
ADF connection issue with Cassandra

Hi, I am trying to connect to a Cassandra DB hosted in Azure Cosmos DB. I created the linked service but get the error below on test connection. I have already checked the Cassandra DB, and its public network access is set to all networks. Searching online suggested enabling SSL, but there is no such option in the linked service. Please help.

Failed to connect to the connector. Error code: 'Unknown', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError' Failed to connect to the connector. Error code: 'InternalError', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError' Failed to connect to Cassandra server due to, ErrorCode: InternalError All hosts tried for query failed (tried 51.107.58.67:10350: SocketException 'A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied')

panksume · Sep 20, 2025 · Copper Contributor · 146 Views · 1 Like · 1 Comment
Help with Partial MongoDB Update via Azure Data Factory Data Flow

Hello, everyone! I have a complex question about how to perform a partial update on a MongoDB collection using Data Flow in Azure Data Factory. My goal is to modify only some nested fields without overwriting the entire document. My flow reads JSON files with the following structure:

    {
        "_id": { "$oid": "1xp3232to" },
        "root_field": "root_value",
        "main_array": [
            {
                "array_id": "id001",
                "status": "PENDING",
                "nested_array": []
            }
        ],
        "numeric_value": { "$numberDecimal": "10.99" }
    }

I need the Data Flow to make two changes in a single run:
1. Change the status field from "PENDING" to "SENT".
2. Add a new object to the nested_array with the following data: event: "SENT", description: "FILE GENERATED", timestamp: (current date and time), system: "Sis Test".

I've tried some expressions with update and append in the Derived Column transformation, but I can't get the syntax right to make both changes at the same time. My biggest concern is with the MongoDB sink: how do I configure it so that the Data Flow performs a partial update and doesn't overwrite the entire document, losing root_field, numeric_value, etc.? My questions are:
- What is the correct expression for the Derived Column that makes these two nested modifications in a single step?
- How should I configure the MongoDB sink to ensure the update is partial, using _id as the key?

I really appreciate the community's help!

leopoldinoex · Sep 05, 2025 · Copper Contributor · 207 Views · 0 Likes · 1 Comment
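To make the intended end state of the entry above concrete, this is roughly what the stored document should look like after one run if the update really is partial: only status changes and one object is appended, while root_field and numeric_value survive untouched. The timestamp value is illustrative.

    {
        "_id": { "$oid": "1xp3232to" },
        "root_field": "root_value",
        "main_array": [
            {
                "array_id": "id001",
                "status": "SENT",
                "nested_array": [
                    {
                        "event": "SENT",
                        "description": "FILE GENERATED",
                        "timestamp": "2025-09-05T12:00:00Z",
                        "system": "Sis Test"
                    }
                ]
            }
        ],
        "numeric_value": { "$numberDecimal": "10.99" }
    }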
User configuration issue

Hi, I am getting the error below:

"Execution fail against sql server. Please contact SQL Server team if you need further support. Sql error number: 3930. Error Message: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction"

I have never faced this kind of error before. Could anyone please let me know what I can do and what I need to do? I am a beginner, so please explain. Thanks, Avi

avi-123 · Aug 21, 2025 · Copper Contributor · 140 Views · 1 Like · 1 Comment
Tags
- azure data factory (177 Topics)
- Azure ETL (47 Topics)
- Copy Activity (41 Topics)
- Azure Data Integration (40 Topics)
- Mapping Data Flows (28 Topics)
- Azure Integration Runtime (25 Topics)
- ADF (5 Topics)
- azure data factory v2 (3 Topics)
- Data Flows (3 Topics)
- pipeline (3 Topics)