Latest Discussions
Web activity failure due to Invoking endpoint failed with HttpStatusCode - 403 -- help?
Hi, I have an Azure Data Factory (ADF) instance that I am using to create a pipeline to ingest external (cloud-based) 3rd-party data into my Azure SQL Server database. I am a novice with ADF and have only used it to ingest some external SQL data into my SQL database - that did work. The external source I'm attempting to extract from uses an OAuth 2.0 API, and an API is something I've not used before. Using Postman (never used this software before this attempt), I passed the external source's base_url, client_id, and client_secret, and in return successfully received an access token. This tells me that the base_url, client_id, and client_secret values I passed are correct and accepted by the target source/application. Feeling encouraged to implement the same values in ADF, I first created a Linked Service, which returned a successful test connection - see below. This Linked Service uses the same values as the Postman entry that was granted an access token. I then created a pipeline with a Web activity in it. The General and User Properties tabs don't have any configuration; only the Settings tab does, which is shown below. Again, the URL, Client ID and Client Secret configured here are the same as those used in Postman (and the Linked Service). I execute the Web activity and it returns a failure - see below. The error states the endpoint refused the request (for an access token). Is this accurate, given I was able to receive an access token via Postman using the same credentials? I don't understand why I can receive an access token via Postman but ADF errors. I'm wondering if I've configured the ADF parts incorrectly, or if there is more needed just to receive an access token, or if it's something else? Are you able to advise what's taking place here? Thanks.
AzureNewbie1 · Feb 25, 2026 · Brass Contributor · 40 Views · 0 likes · 0 Comments
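For the token request in the post above, one frequent cause of a 403 from an OAuth 2.0 token endpoint is the Web activity sending the credentials in a different shape than Postman did (for example a JSON body when the endpoint expects form-encoded fields, or a missing Content-Type header). Below is a minimal sketch of a client_credentials request as it might be configured in the Web activity; the URL and field names are placeholders, so match them to whatever the successful Postman call actually sent rather than treating this as the provider's contract.

    URL:      https://<token-endpoint>/connect/token          (placeholder)
    Method:   POST
    Headers:  Content-Type: application/x-www-form-urlencoded
    Body:     grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>

In Postman, the code-snippet pane can show the raw request that worked, which is a convenient reference for reproducing the same body and headers in ADF.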
Unable to INSERT rec into Table.
I am using the ADF Execute Pipeline activity. My pipeline is parameterized. My stage table has the below record:

    network_org_name  partner_name  parent_partner_name  CorPart_number  pipeline_run_id  activity_run_id
    Se Network Producing UB SA S Reine Company Ltd  1226133  ebf961f9-3afc-4abc-9ee0-111330c9f1eb  858c1b5c-a699-4e73-84d8-3a42cab115d2

    WITH CodeSourceTableCTE AS (
        SELECT DISTINCT
            p.[partner_number],
            n.[network_org_number]
        FROM [utl].[stg_partner_network_org] stg
        LEFT JOIN [utl].[partner] p1
            ON stg.[parent_partner_name] = p1.[partner_name]
        INNER JOIN [utl].[partner] p
            ON stg.[partner_name] = p.[partner_name]
            AND ISNULL(p.[delete_ind], '') <> 'Y'
            AND ISNULL(p.[parent_partner_number], '') = ISNULL(p1.[partner_number], '')
            AND ISNULL(p1.[delete_ind], '') <> 'Y'
            AND ISNULL(stg.[corpart_number], '') = ISNULL(p.[external_id], '')
        INNER JOIN [utl].[network_org] n
            ON stg.[network_org_name] = n.[network_org_name]
        WHERE stg.[partner_name] IS NOT NULL
    )
    MERGE [utl].[partner_network_org] T
    USING CodeSourceTableCTE S
        ON T.[partner_number] = S.[partner_number]
        AND T.[network_org_number] = S.[network_org_number]
    WHEN MATCHED AND (ISNULL(T.[delete_ind], '') <> 'N') THEN
        UPDATE SET
            T.[delete_ind] = 'N',
            T.[last_modified_date] = GETUTCDATE(),
            T.[last_modified_by] = '@{pipeline().parameters.Source_File_Name}'
    WHEN NOT MATCHED BY TARGET THEN
        INSERT ([partner_number], [network_org_number], [created_date], [created_by], [last_modified_date], [last_modified_by], [delete_ind])
        VALUES (S.[partner_number], S.[network_org_number], GETUTCDATE(), '@{pipeline().parameters.Source_File_Name}', GETUTCDATE(), '@{pipeline().parameters.Source_File_Name}', 'N');

    SELECT COUNT(*) FROM [utl].[partner_network_org] WHERE [created_by] = '@{pipeline().parameters.Source_File_Name}'

But when I try to check using the below query:

    select * from utl.partner_network_org where partner_number in (select partner_number from utl.partner where last_modified_by like '%corpart%')

I don't see any records. Any help is appreciated.
DRN_SDE · Feb 16, 2026 · Copper Contributor · 30 Views · 0 likes · 0 Comments
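On the MERGE above: the CTE uses INNER JOINs with several equality conditions (the staged corpart_number must equal partner.external_id, and the parent-partner numbers must match), so a single failed condition removes the staged row from the MERGE source entirely and nothing is inserted. A hedged diagnostic, written as a sketch against the table names in the post, is to rerun the joins as LEFT JOINs and see which side comes back NULL:

    -- Diagnostic sketch: every staged row appears; NULL columns show which lookup failed.
    SELECT
        stg.[partner_name],
        stg.[corpart_number],
        p.[partner_number]     AS matched_partner,
        p.[external_id]        AS partner_external_id,
        p1.[partner_number]    AS matched_parent_partner,
        n.[network_org_number] AS matched_network_org
    FROM [utl].[stg_partner_network_org] stg
    LEFT JOIN [utl].[partner] p1 ON stg.[parent_partner_name] = p1.[partner_name]
    LEFT JOIN [utl].[partner] p  ON stg.[partner_name] = p.[partner_name]
    LEFT JOIN [utl].[network_org] n ON stg.[network_org_name] = n.[network_org_name]
    WHERE stg.[partner_name] IS NOT NULL;

Also note that the check query in the post filters partner.last_modified_by with LIKE '%corpart%', which is a different column and value than the created_by stamp the MERGE writes, so it may not find the new rows even when the insert succeeded.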
Azure Data Factory - help needed to ingest data, that uses an API, into a SQL Server instance
Hi, I need to ingest 3rd-party data, that uses an API, into my Azure SQL Server instance (ASSi) using Azure Data Factory (ADF). I'm a report developer so this area is unfamiliar to me, although I have previously integrated external, on-premise SQL Server data into our ASSi using ADF, so I do have some exposure to the tool (just not API connections). The 3rd-party data belongs to a company named 'iLevel' (in case this is relevant). iLevel have provided some API documentation which is targeted at an experienced data engineer who understands API connections. This documentation has just a few sections before the connection-focused details end. I'll list them below:
1) It mentions downloading a 'Postman Collection' and says no more than this. I've never heard of a Postman Collection and I don't know why it's needed. From limited reading online, I don't understand its purpose, certainly not in my scenario.
2) It has the title 'Access to the API' and then lists four URLs which are the endpoints (I don't know which to use and will need to ask iLevel, but I guess any will do for testing purposes).
3) Authentication and Authorization:
a) Generate a 'client id' and 'client secret' by logging into iLevel and generating these values with a few clicks of a button. I've successfully generated both these values.
b) Obtain an Access Token - it'll be easier to screenshot the instructions for this (I've blanked part of the URL for confidentiality).
These are all the instructions on connecting to the 3rd-party data. Unfortunately for me, my lack of experience in this area means these instructions don't help me. I don't believe I'm any closer to connecting to the 3rd-party source data. Taking the above instructions into consideration, but choosing to trial and error in ADF, a tool I'm a little more familiar with, I've performed the following steps:
1) Created a Linked Service. I understand the iLevel solution is in the cloud and therefore the 'AutoResolveIntegrationRuntime' option has been selected as the 'Connect via integration runtime' value. For the 'Base URL' I've entered one of the four URL endpoints listed in the documentation (again, I will need to confirm which endpoint to use). The 'Test Connection' returns a successful result, but I think it means nothing because if I were to place 'xxx' at the end of the Base URL and test the connection, it still returns successful when I know the URL with the 'xxx' postfix isn't legit.
2) Created an ADF pipeline containing 'Web activity' and 'Set variable' objects. The only configuration under the Web activity is the 'Settings' pane, which has: The 'Body' property has (the client id and client secret taken from the iLevel solution are included in the body but blanked out): If the Web activity is successful then the pipeline's next object (the Set variable) should assign the access token to a variable - as I understand it, this is what the Web activity retrieves: The 'Value' property has:
This is as far as I've got in my efforts on this integration task, because the Web activity object fails when executed. The error message does state it is to do with an invalid 'client id' or 'client secret' - see below: You may direct me to focus on the incorrect client id or client secret; however, I don't have any confidence that I understand how to configure ADF to obtain an access token, and I'm maybe missing something if I see no need for the Postman Collection. What is a Postman Collection and do I need it for what I'm trying to achieve? If yes, can anyone provide training material that suits my need? Have I configured ADF correctly and it is indeed an issue with the client id or client secret, or is the error message received just a by-product of an incorrect ADF configuration? Your help will be most appreciated. Many thanks.
AzureNewbie1 · Jan 28, 2026 · Brass Contributor · 84 Views · 0 likes · 0 Comments
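Regarding the post above: a Postman Collection is just a shareable file of ready-made example requests for an API; you import it into the Postman app to try the calls interactively, so it is useful for learning iLevel's endpoints but nothing in ADF requires it at runtime. For the Set variable step, the usual pattern is to read the token out of the Web activity's JSON output. The expression below is a sketch that assumes the Web activity is named 'Get Access Token' and that the response contains a field called access_token; the actual field name depends on iLevel's response, so check the activity output in a debug run first.

    @activity('Get Access Token').output.access_token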
ADF unable to ingest partitioned Delta data from Azure Synapse Link (Dataverse/FnO)
We are ingesting Dynamics 365 Finance & Operations (FnO) data into ADLS Gen2 using Azure Synapse Link for Dataverse, and then attempting to load that data into Azure SQL Database using Azure Data Factory (ADF). This is part of a migration effort as Export to Data Lake is being deprecated.
Source Details
- Source: ADLS Gen2
- Data generated by: Azure Synapse Link for Dataverse (FnO)
- Format on lake: Delta / Parquet
- Partitioned folder structure (e.g. PartitionId=xxxx)
- Destination: Azure SQL Database
Issue Observed in ADF
When configuring ADF pipelines using an ADLS Gen2 dataset with Delta / Parquet, recursive folder traversal, and wildcard paths, we encounter either no data returned in Data Preview, or a runtime error such as: "No partitions information found in metadata file". Despite this, the data is present in ADLS, and the same data can be successfully queried using Synapse serverless SQL.
Key Question for ADF / Synapse Engineers
What is the recommended and supported ADF ingestion pattern for partitioned Delta/Parquet data produced by Azure Synapse Link for Dataverse? Specifically:
- Should ADF read Delta tables directly, or use Synapse serverless SQL external tables/views as an intermediate layer?
- Is there a reference architecture for: Synapse Link → ADLS → ADF → Azure SQL?
- Are there ADF limitations when consuming Synapse Link–generated Delta tables?
Many customers are now forced to migrate due to the Export to Data Lake deprecation, but current ADF documentation does not clearly explain how to replace existing ingestion pipelines when using Synapse Link for FnO. Any guidance, patterns, or official documentation would be greatly appreciated.
Raheelislam · Jan 08, 2026 · Copper Contributor · 64 Views · 0 likes · 0 Comments
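One pattern often suggested for the scenario above, given that serverless SQL can already read the data, is to use Synapse serverless SQL as the intermediate layer: expose the Synapse Link Delta folders through OPENROWSET (or views built on it) and let ADF copy from that SQL endpoint instead of parsing the Delta transaction log itself. A minimal sketch follows; the storage account, container, and table folder are placeholders to adjust to your lake layout.

    -- Serverless SQL sketch: read the Delta output of Synapse Link,
    -- then point an ADF Copy activity at a view wrapping this query.
    SELECT TOP 100 *
    FROM OPENROWSET(
        BULK 'https://<storageaccount>.dfs.core.windows.net/<container>/<fno_table>/',
        FORMAT = 'DELTA'
    ) AS rows;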
It is a simple setup, but it has baffled me a lot. I'd like to copy data to a data lake via API. Here are the steps I've taken:
1) Created an HTTP linked service as below:
2) Created a dataset with an HTTP Binary data format as below:
3) Created a pipeline with a Copy Data activity only, as shown below:
4) Made sure the linked service and dataset are all working fine, as below:
5) Created a Sink dataset with 3 parameters, as shown below:
6) Passed parameters from the pipeline to the Sink dataset, as below:
That's all. Simple, right? But the pipeline failed with a clear message "usually this is caused by invalid credentials.", as below:
Summary: No need to worry about the Sink side of parameters etc., which I have used the same way for years on other pipelines, all of which succeeded. This time the API failed to reach the data lake from the source side, saying "invalid credentials". In Step 4 above one can see the linked service and dataset connections succeeded, i.e. the credentials have been checked and passed already. How come it failed in the Copy Data activity complaining about invalid credentials? Pretty weird. Any advice and suggestions will be welcomed.
AJ81 · Dec 02, 2025 · Copper Contributor · 69 Views · 0 likes · 0 Comments
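One thing worth noting on the post above: a successful test connection on an HTTP linked service generally only proves the base URL is reachable; the credentials, relative URL, and headers that the Copy activity actually sends are only exercised at copy time, and sink-side storage authentication is also only fully checked then. If the API expects a token or key on every request, a sketch of how it might be supplied is below; the property labels and the api_token variable are assumptions to verify against your connector version, not a confirmed fix.

    Copy activity source settings (sketch):
        Request method:     GET
        Additional headers: Authorization: Bearer @{variables('api_token')}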
In ADF, I am using a For-Each loop containing an Execute Pipeline activity that is executed in different iterations as per the values of the items provided to the For-Each loop. I am stuck on a scenario which requires me to add a Dynamic Content Expression in the User Properties of individual activities in ADF. Specific to my case, I want to add a Dynamic Content Expression in the User Properties of the Execute Pipeline activity so that I can see individual runs of these activities in Azure Monitor with a specific label attached through its User Properties. I need the Dynamic Content Expression in the User Properties because each execution in the respective iterations corresponds to a particular Step from the set of Steps configured for the Data Load Job as a whole, which has been orchestrated through ADF. To identify the association with the respective Job-Step, I need to add a Dynamic Content Expression in its User Properties. Any sort of response regarding this is highly appreciated. Thank You!
manuj · Nov 13, 2025 · Copper Contributor · 169 Views · 1 like · 0 Comments
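User properties live in the activity's JSON definition, so one approach for the post above is to edit the Execute Pipeline activity's JSON and put the expression in the property value. The sketch below assumes the For-Each items expose a StepName field and that the referenced pipeline is called PL_Load_Step (both hypothetical names); whether the expression is evaluated per iteration and surfaced as a label in Azure Monitor is worth confirming on a small test pipeline before relying on it.

    {
        "name": "Execute Pipeline - Load Step",
        "type": "ExecutePipeline",
        "userProperties": [
            { "name": "JobStep", "value": "@{item().StepName}" }
        ],
        "typeProperties": {
            "pipeline": { "referenceName": "PL_Load_Step", "type": "PipelineReference" },
            "waitOnCompletion": true
        }
    }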
Hi everybody. This seems like it should be straightforward, but it just doesn't seem to work... I have a file containing JSON data, one document per line, with many different types of data. Each type is identified by a field named "OBJ", which tells me what kind of data it contains. I want to split this file into separate files in Blob storage for each object type prior to doing some downstream processing. So, I have a very simple data flow - a source which loads the whole file, and a sink which writes the data back to separate files. In the sink settings, I've set the "File name option" setting to "Name file as column data" and selected my OBJ column for the "Column Data", and this basically works - it writes out a separate file for each OBJ value, containing the right data. So far, so good. However, what doesn't seem to work is the very simplest thing - I want to write the output files to a folder in my Blob storage container, but the sink seems to completely ignore the "Folder path" setting and just writes them into the root of the container. I can write my output files to a different container, but not to a subfolder inside the same container. It even creates the folder if it's not there already, but doesn't use it. Am I missing something obvious, or does the "Folder path" setting just not work when naming files from column data? Is there a way around this?
DuncanKing · Oct 14, 2025 · Copper Contributor · 63 Views · 0 likes · 0 Comments
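A workaround that is often suggested when "Name file as column data" ignores the folder path is to build the subfolder into the column value itself with a derived column, so the sink receives a relative path rather than a bare file name. The sketch below assumes a hypothetical derived column named sinkFileName and a target subfolder called split-output; adjust both to your container layout and point "Column data" at the new column instead of OBJ.

    // Data flow derived column (sketch): embed the subfolder in the file name value
    sinkFileName = concat('split-output/', OBJ, '.json')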
Hi, I'm trying to create a linked service to a SQL Managed Instance. The Managed Instance is configured with a VNet-local endpoint. If I try to connect with an AutoResolve IR or a SHIR, I get the following error:
The value of the property '' is invalid: 'The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net''. The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net'
Is there a way to connect to it without resorting to a private endpoint? Cheers, Alex
alexp01482 · Sep 25, 2025 · Copper Contributor · 99 Views · 0 likes · 0 Comments
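On the error above: the name the IR is trying to resolve is the public-endpoint form of the host name, which only resolves once the managed instance's public endpoint has been enabled, and that endpoint listens on port 3342 rather than 1433. The VNet-local endpoint is only reachable from inside (or peered/connected to) the instance's VNet, which in practice means a self-hosted IR on a VM in that network. A sketch of the two host name forms, with the instance and DNS zone names treated as placeholders:

    -- Host name forms for a SQL Managed Instance (sketch, placeholder names):
    -- VNet-local endpoint, from a SHIR inside or peered with the MI VNet:
    --     SQL01.ec9fbc2870dd.database.windows.net,1433
    -- Public endpoint (must be explicitly enabled and allowed through the NSG):
    --     SQL01.public.ec9fbc2870dd.database.windows.net,3342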
Hi, I am trying to connect to a Cassandra DB hosted in Azure Cosmos DB. I created the linked service but am getting the below error on test connection. I have already checked the Cassandra DB and its public network access is set to all networks. Google suggested enabling SSL, but there is no such option in the linked service. Please help.
Failed to connect to the connector. Error code: 'Unknown', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to the connector. Error code: 'InternalError', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to Cassandra server due to, ErrorCode: InternalError
All hosts tried for query failed (tried 51.107.58.67:10350: SocketException 'A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied')
panksume · Sep 10, 2025 · Copper Contributor · 187 Views · 1 like · 1 Comment
Hi, I am getting the below error: "Execution fail against sql server. Please contact SQL Server team if you need further support. Sql error number: 3930. Error Message: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction." I have never faced this kind of error. Could anyone please let me know what I can do and what I need to do? I am a beginner, please explain it to me. Thanks, Avi
avi-123 · Aug 19, 2025 · Copper Contributor · 159 Views · 1 like · 1 Comment
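Error 3930 in the post above generally means an earlier error inside a TRY block "doomed" the transaction (often in a stored procedure the pipeline calls), so later statements can no longer write to the log and the only valid action is a rollback. A common T-SQL pattern, shown here as a sketch rather than a fix for this specific pipeline, is to check XACT_STATE() in the CATCH block and roll back before re-raising the error:

    -- Sketch: roll back a doomed transaction instead of trying to continue it.
    BEGIN TRY
        BEGIN TRANSACTION;
        -- data modification statements here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0      -- 1 = committable, -1 = doomed; both need a rollback here
            ROLLBACK TRANSACTION;
        THROW;                    -- surface the original error to the ADF activity
    END CATCH;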