Recent Discussions
PostgreSQL - Flexible Compute - VNet DNS Update
Hello, I have a customer planning to update the VNet DNS from custom to Default. The client is running a PostgreSQL Flexible Server with VNet integration. Does the DNS change impact the PostgreSQL Flexible Server compute? What is the best way to apply the DNS update to the PostgreSQL Flexible Server compute without having to restart it? Thanks, Terru

AZURE SQL DB Deadlock
I see more than 2,000 deadlocks every day for my Azure SQL DB in the deadlock metrics, but they are not causing any missing data or dropped transactions. My application does not use any retry logic, so how is it possible that the deadlocks are being resolved automatically and have no impact at all?

Clarification on Staging Directory Usage for SAP CDC Connector in Azure Data Factory
Hi! I'm currently working on a project where we are ingesting data from SAP using the SAP CDC connector in Azure Data Factory (data flow). The source is S/4HANA CDS views. We are using a staging directory for the data flow with a checkpoint mechanism, similar to what is described here: https://learn.microsoft.com/en-us/azure/data-factory/connector-sap-change-data-capture My question is: does the staging directory only act as temporary storage during ingestion from SAP? If I understand correctly, it is used for retries but has no real purpose once the deltas have been ingested. After the data has been loaded to the destination (in our case, a container inside ADLS), is the staged data needed for maintaining delta state? Can it be safely deleted from the staging container without impacting subsequent load runs? We were thinking of implementing a 7-day retention policy on the staging container so we can manage storage efficiently. Thank you in advance for any information regarding this.
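
A minimal sketch of the 7-day clean-up idea, assuming the staging container is reachable with a connection string and that deleting staged files has been confirmed safe for the delta state; the connection string and container name below are placeholders:

    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import BlobServiceClient

    # Placeholders -- replace with your storage account connection string and staging container name.
    CONNECTION_STRING = "<storage-account-connection-string>"
    STAGING_CONTAINER = "sapcdc-staging"

    cutoff = datetime.now(timezone.utc) - timedelta(days=7)

    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(STAGING_CONTAINER)

    # Delete staged files older than seven days; exclude checkpoint files if they live in the same container.
    for blob in container.list_blobs():
        if blob.last_modified < cutoff:
            container.delete_blob(blob.name)

Azure Blob Storage lifecycle management policies can apply the same age-based rule declaratively, which avoids running any code at all.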

Executing Oracle Stored Procedure in Azure Data Factory
Hello there, I want to execute (call) an Oracle stored procedure from Azure Data Factory (ADF). The stored procedure resides on an on-premises Oracle server, and I was able to establish connectivity via the self-hosted integration runtime. The stored procedure has one input parameter (a user-defined object datatype) and one output parameter (VARCHAR2). Can you please guide me on which ADF activity to use to call the Oracle stored procedure, and show how it is done in terms of variable declarations and the call command? A walkthrough example with screenshots would be really helpful. Also, I need to capture the stored procedure's output parameter value in ADF so that I can use it to control the pipeline flow in subsequent steps. Thank you.
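
For reference, outside ADF the call itself is straightforward with the python-oracledb driver; a hedged sketch (connection details, package, procedure, and parameter names are hypothetical, and the user-defined object input is simplified to a plain string):

    import oracledb

    # Hypothetical connection details -- adjust to your environment.
    conn = oracledb.connect(user="app_user", password="app_password",
                            dsn="onprem-host:1521/ORCLPDB1")
    cur = conn.cursor()

    # The VARCHAR2 output parameter is captured through a bind variable.
    out_status = cur.var(str)

    # A user-defined object input would normally be built with conn.gettype(...).newobject();
    # a plain string is used here purely for illustration.
    cur.callproc("MY_PACKAGE.MY_PROCEDURE", ["input-value", out_status])

    print(out_status.getvalue())
    cur.close()
    conn.close()

Inside ADF, the OUT value generally has to come back as a result set (for example via a wrapper function queried from a Lookup activity) before it can drive downstream activities.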

Azure Synapse: What’s Next?
With the recent introduction of Microsoft Fabric, which aims to unify various data and analytics workloads into a single platform, how will this impact the future of Azure Synapse Analytics? Specifically, will Azure Synapse Analytics become obsolete, or will it continue to play a significant role alongside Microsoft Fabric? Additionally, what are the recommended migration paths and considerations for organizations heavily invested in Azure Synapse Analytics?

What is the way to use OUTPUT parameter for an Oracle Stored procedure in ADF pipelines?
I have an Oracle database package and I am trying to call a stored procedure inside that package. The procedure has an OUT parameter that we want to use in subsequent activities in the ADF pipeline. But ADF pipelines do not have a way to get the OUT parameter value and use it in the pipeline. This is a very important feature.

Using "Expect Unique" assert type on ADF flow with additional streams
Hello, I'm trying to use the ADF assert transformation with an additional stream, but it doesn't seem to work. It recognizes the column name and I can select it, but when I try a data preview I get "Resolved attribute(s) [...] missing from (list of attributes coming from main source) in operator !Project (list of attributes from main source + that attribute)". The attribute does exist in my source: I can see it in the data preview and use it in a lookup. I tried other attributes with the same result. The documentation is not very clear on whether this scenario is possible. I'm guessing it is not, but would like confirmation. Thank you.

Database connection
Hello, I have created a linked service to connect to a database, but the connection is failing due to a firewall issue. I've whitelisted all the IPs for the region in which my resource is present, but it's still failing, and the IP in the error is from another region. Why is this happening?

Writing data to a lookup field via alternative keys
Hi folks, I have two tables: 1. Contact, with a secondary (alternate) key. 2. Policyholder, with a lookup field to Contact and a column that contains the Contact secondary key GUID as a string. I want to take the column named contactSecondaryKeyString in Policyholder, which contains the secondary key string (GUID), and push it into my policyholder lookup field in the Policyholder table. According to Microsoft it is possible: Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM - Azure Data Factory & Azure Synapse | Microsoft Learn. Here is my mapping code, and I get the error: "ErrorCode=DynamicsOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=,Source=,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Dynamics operation failed with error code: -2147217149, error message: 'prsc_phplan' entity doesn't contain attribute with Name = 'prsc_policyholder@prsc_systemid' and NameMapping = 'Logical'". I tried many variations and still can't figure it out...
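
One way to sanity-check the alternate-key reference outside ADF is to issue the equivalent Dataverse Web API update directly. A hedged sketch in Python: the environment URL, entity set, navigation property, and key names are hypothetical placeholders, and the navigation-property name used in the binding may differ in casing from the attribute's logical name:

    import requests

    # Placeholders -- environment URL, bearer token, and schema names are hypothetical.
    DATAVERSE_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
    TOKEN = "<access-token-from-entra-id>"

    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    }

    # Bind the lookup on the policyholder row to a Contact addressed by its alternate key.
    payload = {
        "prsc_PolicyHolder@odata.bind": "/contacts(prsc_contactalternatekey='11111111-1111-1111-1111-111111111111')"
    }

    resp = requests.patch(
        f"{DATAVERSE_URL}/prsc_phplans(22222222-2222-2222-2222-222222222222)",
        headers=headers,
        json=payload,
    )
    resp.raise_for_status()

If the direct Web API call succeeds, the remaining issue is most likely the alternate key name used in the ADF sink column mapping.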

What Synapse Serverless SQL pool authentication type for ADF Linked Service?
Hi, I'm relatively new to Azure Data Factory and require your guidance on how to successfully create and test a linked service to an Azure Synapse Analytics serverless SQL pool.

In the past, I've successfully created a linked service to a third-party (outside our domain) on-premises SQL Server by creating a self-hosted integration runtime on their box and then creating a linked service that uses it. The server name, database name, Windows authentication, and my username and password, all configured by the third party, are what I entered into the linked service configuration boxes. All tested successfully. That third-party data was extracted and imported, via ADF pipelines, into an Azure SQL Server database within our domain.

Now I need to extract data from our own Azure Synapse Analytics serverless SQL pool database (hosted in our domain). My attempt is this, and it fails:

1) I create an 'Azure Synapse Analytics' data store linked service.
2) I select 'AutoResolveIntegrationRuntime' as the runtime to use. I'm thinking this is correct, as the Synapse source is within our domain (we're fully Microsoft cloud based).
3) I select 'Enter manually' under 'Account selection method'.
4) I have the Azure Synapse Analytics serverless SQL endpoint, which I place into the 'Fully qualified domain name' field.
5) I enter the SQL database name found under the 'SQL database' node/section on the Data >> Workspace screen in Synapse.
6) I choose 'System-assigned managed identity' as the authentication type. This is a guess; I was hoping it would recognise the username/account I am building the linked service with, as that account can also query Synapse and so has Synapse access.
7) I check the 'Trust server certificate' box. Everything else is left at the default.

When I click test connection, it fails with the following message: "Cannot connect to SQL Database. Please contact SQL server team for further support. Server: 'xxxxxxxxxxxx-ondemand.sql.azuresynapse.net', Database: 'Synapse_Dynamics_data', User: ''. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access. Login failed for user '<token-identified principal>'."

I've reached out to our I.T. team (who are novices with Synapse, ADF, etc., even though they installed them in our domain) and they don't know how to help me. I'm hoping you can.

1) Is 'Azure Synapse Analytics' the correct data store to choose when looking to extract data from an Azure Synapse serverless SQL pool database?
2) Is using the AutoResolveIntegrationRuntime correct if Synapse is held within our domain? I've previously confirmed this runtime works (and still does), as I had to use it when loading the third-party data into our Azure SQL Server database.
3) Have I populated the correct values for the 'Fully qualified domain name' and 'Database name' fields by entering the Azure Synapse Analytics serverless SQL endpoint and the SQL database name, respectively?
4) Is choosing 'System-assigned managed identity' as the authentication type correct? I'm guessing this could be the issue. I selected it because it was the authentication type used (and it works) when loading the third-party data into the Azure SQL Server database in our domain, so I assumed it somehow recognises the logged-in user and, through the magic of cloud authentication, confirms that this user has the correct privileges (which I should have, according to I.T.), and so allows the linked service to work.

Any guidance you can provide will be much appreciated. Thanks.

Extending by a function output

    datatable(ids: dynamic)
    [
        dynamic(["value1", "value2"])
    ]
    | function(ids)

The snippet above works fine and returns a table.

    datatable(date: datetime, ids: dynamic)
    [
        datetime(2022-01-01), dynamic(["value1", "value2"]),
        datetime(2022-01-02), dynamic(["value3", "value4"])
    ]
    | extend outputs = function(ids)

This one, however, complains that extend expects a scalar, not the table that the function returns.

    datatable(date: datetime, ids: dynamic)
    [
        datetime(2022-01-01), dynamic(["value1", "value2"]),
        datetime(2022-01-02), dynamic(["value3", "value4"])
    ]
    | extend outputs = toscalar(function(ids))

When using toscalar, ids cannot be referenced. Is there a workaround? The function takes in a dynamic and returns a tabular expression of one row and two columns.

Azure SQL programmatic access
I work with a team of statisticians who would like to programmatically access an Azure SQL database via R/RStudio without human interaction. Access is currently set up with Microsoft Entra ID with MFA. The team ideally wants to run their R scripts and connect to the database without having to be there to enter their login credentials and complete MFA. Can this be done via a key or SAS token? What is the best or recommended method to do this? Thank you. Z
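
For non-interactive access, the usual pattern is token-based authentication with a service principal (or managed identity) rather than a key or SAS token. A minimal sketch of the idea, shown in Python for illustration (the same token-based flow is available from R through an ODBC connection); tenant, client, server, and database names are placeholders:

    import struct
    import pyodbc
    from azure.identity import ClientSecretCredential

    # Placeholders -- a service principal that has been granted access to the database.
    credential = ClientSecretCredential(
        tenant_id="<tenant-id>",
        client_id="<app-client-id>",
        client_secret="<app-client-secret>",
    )

    # Acquire an Entra ID access token for Azure SQL -- no user prompt or MFA involved.
    token = credential.get_token("https://database.windows.net/.default")
    token_bytes = token.token.encode("utf-16-le")
    token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

    SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC driver attribute for passing an access token

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:<server-name>.database.windows.net,1433;"
        "Database=<database-name>;Encrypt=yes;",
        attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
    )
    print(conn.execute("SELECT 1").fetchone())

The service principal still has to be created as a user in the database (CREATE USER ... FROM EXTERNAL PROVIDER) and granted the roles it needs.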

Importing bacpac to Azure database fails
I am trying to import a database from a bacpac file to our Azure SQL server. I am following these steps: creating a bacpac file from our production database running on MS SQL Server 2022, then importing it into Azure using sqlpackage. But it keeps failing with: *** An unexpected failure occurred: Data plan execution failed with message One or more errors occurred. (One or more errors occurred. (One or more errors occurred. (Violation of PRIMARY KEY constraint 'PK_ForexQuote'. Cannot insert duplicate key in object 'dbo.ForexQuote'. The duplicate key value is (1). The statement has been terminated.))). The ForexQuote table DOES NOT have any duplicate primary keys. If I omit this table from my bacpac export, I get a similar error on another table. I don't understand. What is going on here?

Send files from Blob Storage to Vendor API
Hello, in an Azure Blob container in our tenant we have several thousand .json files that need to be ingested by a vendor search API. The vendor example is to send each one as PUT /v2/indices/{index public key}/documents/{document id} to their API. My background with ADF is copying files from a local file share to blob storage: I just copy with a source and sink, and it works. Never having done this before, I used Copilot, which suggested creating a pipeline using the Get Metadata activity. I did that. The Get Metadata settings point to a dataset (see attachments for images; note the blob settings show a successful connection and preview). At this point I just tried to debug it and got this message: Error code 2001, Failure type: User configuration issue, Details: The length of execution output is over limit (around 4MB currently), Activity ID b74f3802-804f-4b04-91c2-3f68546a40a5. Each file is about 20KB, but I suspect it is trying to get all the files as one. If this is the case, how do I get it to iterate one by one? Copilot said to use a Filter activity, but that comes AFTER the Get Metadata activity. Any help on how to proceed or troubleshoot this better? Thanks, V
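
The per-document PUT loop itself is simple outside ADF; a hedged sketch in Python (the vendor URL, index key, and API-key header are hypothetical placeholders), listing blobs one by one and sending each as its own request:

    import requests
    from azure.storage.blob import BlobServiceClient

    # Placeholders -- storage connection string, container, and the vendor's endpoint/key.
    CONNECTION_STRING = "<storage-account-connection-string>"
    CONTAINER = "documents"
    VENDOR_BASE_URL = "https://vendor.example.com/v2/indices/<index-public-key>/documents"
    VENDOR_API_KEY = "<vendor-api-key>"

    container = BlobServiceClient.from_connection_string(CONNECTION_STRING).get_container_client(CONTAINER)

    # Iterate blob by blob so no single payload approaches any size limit.
    for blob in container.list_blobs():
        body = container.download_blob(blob.name).readall()
        document_id = blob.name.removesuffix(".json")  # requires Python 3.9+
        resp = requests.put(
            f"{VENDOR_BASE_URL}/{document_id}",
            data=body,
            headers={"Content-Type": "application/json", "x-api-key": VENDOR_API_KEY},
        )
        resp.raise_for_status()

Inside ADF, the usual equivalent is Get Metadata (Child items) feeding a ForEach that calls a Web activity per file; the 4 MB cap applies to the activity's output, so listing only file names (not content) and splitting the folder if the name list itself gets huge keeps it under the limit.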

Azure Sql Database Linked Service Lookup Activity Error
Started receiving this error when previewing results in both Lookup and Copy activities that are connected to an Azure SQL Database. My production pipeline (which is triggered) fails on the Azure SQL Database Lookup activity. Does anyone have any insights on what is causing this issue and/or how to resolve it? Thanks!

ServiceNow Connection - data request URL too long for pagination
Hi, we've encountered an issue after setting up a connection between Data Factory and ServiceNow. Our team has been trying to query a really big table (alm_asset) from our ServiceNow instance, and when we set Pagination to anything but empty, Data Factory for some reason lists all of the columns to be queried. We couldn't find where that column list is configured, and our REST request could not be executed because the URL was too long, so pagination could not fit. The API request to ServiceNow failed. Request Url: -- removed --, Status Code: BadRequest, Error message: {"error":{"message":"Pagination not supported","detail":"The requested query is too long to build the response pagination header URLs. Please do one of the following: shorten the sysparm_query, or query without pagination by setting the parameter 'sysparm_suppress_pagination_header' to true, or set 'sysparm_limit' with a value larger then 4182 to bypass the need for pagination."},"status":"failure"}. This 4182 is just on a sub-production instance; on the production instance we have significantly more data. Can somebody help with how to edit the parameters sent to the REST API through that connector?
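
For reference, the parameters the error points at are standard ServiceNow Table API query parameters. A hedged sketch of paging the table explicitly with them from Python (instance name, credentials, and field list are placeholders), which is also a way to confirm what the instance accepts before mapping the same behaviour onto the connector:

    import requests

    # Placeholders -- instance, credentials, and table.
    INSTANCE = "https://yourinstance.service-now.com"
    AUTH = ("api_user", "api_password")
    TABLE = "alm_asset"
    PAGE_SIZE = 1000

    offset = 0
    while True:
        resp = requests.get(
            f"{INSTANCE}/api/now/table/{TABLE}",
            auth=AUTH,
            params={
                "sysparm_limit": PAGE_SIZE,
                "sysparm_offset": offset,
                # Avoid the pagination header the error message complains about.
                "sysparm_suppress_pagination_header": "true",
                # Requesting only the fields you need keeps the URL short.
                "sysparm_fields": "sys_id,asset_tag,display_name",
            },
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()
        rows = resp.json()["result"]
        if not rows:
            break
        # ... process rows ...
        offset += PAGE_SIZE

Whether the ADF ServiceNow connector exposes all of these knobs depends on the connector version, so treat this as a way to verify instance-side limits rather than a drop-in fix.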

Azure Data Factory Web API call to Azure REST API Slow
I can reproduce this easily. A call to the Azure REST API to publish data to Event Hubs using SAS authorization takes a few milliseconds from any REST client.

1. Create the Event Hubs namespace URI and the event hub name.
2. Copy the URI and the RootManageSharedAccessKey.
3. Create the authorization for the REST API: https://learn.microsoft.com/en-us/rest/api/eventhub/generate-sas-token#code-try-8
4. Send data to the event hub via the REST API using that authorization with Postman or Thunder Client: https://learn.microsoft.com/en-us/rest/api/eventhub/send-event
5. Create an Azure Data Factory.
6. Create a ForEach activity containing a Web activity and iterate over an array of four items in parallel.

The Web activity takes 3 to 4 seconds per call when invoked from Azure Data Factory. Any ideas are welcome.
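
For completeness, the baseline being compared against (SAS token generation plus a single send-event call) looks roughly like this in Python, following the two linked docs; the namespace, event hub, and key values are placeholders:

    import base64
    import hashlib
    import hmac
    import json
    import time
    import urllib.parse
    import requests

    # Placeholders -- namespace, event hub, and the RootManageSharedAccessKey value.
    NAMESPACE = "mynamespace"
    EVENT_HUB = "myeventhub"
    KEY_NAME = "RootManageSharedAccessKey"
    KEY_VALUE = "<shared-access-key>"

    def generate_sas_token(uri, key_name, key_value, ttl_seconds=3600):
        # Sign "<url-encoded resource URI>\n<expiry>" with the shared access key (HMAC-SHA256).
        expiry = str(int(time.time()) + ttl_seconds)
        encoded_uri = urllib.parse.quote_plus(uri)
        to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
        signature = base64.b64encode(hmac.new(key_value.encode("utf-8"), to_sign, hashlib.sha256).digest())
        return (f"SharedAccessSignature sr={encoded_uri}"
                f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}&skn={key_name}")

    uri = f"https://{NAMESPACE}.servicebus.windows.net/{EVENT_HUB}"
    resp = requests.post(
        f"{uri}/messages",
        headers={
            "Authorization": generate_sas_token(uri, KEY_NAME, KEY_VALUE),
            "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
        },
        data=json.dumps({"DeviceId": "dev-01", "Temperature": "37.0"}),
    )
    print(resp.status_code)  # 201 indicates the event was accepted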

Error 403 on random occasions
I have been getting the following error on random occasions since yesterday. It may be related to other issues I am seeing at the same time, such as the Spark pool suddenly not starting up and the splash screen getting stuck with a 401 (unauthorized) error. This is not related to our access configuration; it has been working before. It also works some of the times I try, and other times not. It seems completely random. Azure Service Health is just showing green (as always).

Synapse Studio stuck on loading the splash screen with '401 unauthorized'
Synapse Studio has been stuck on loading for two days in a row. On a few occasions I have been able to access Synapse, but most often I am not. It just gets stuck on the splash screen when starting Studio. Yesterday evening I was able to work as normal, but in the morning the issue was back again. This happens on different instances and resource groups where Synapse is installed. I have tried both Edge and Chrome, with the same issue in both. We have tried different accounts where we know we have working access. I have checked Azure health status, and Synapse falsely shows up as green. When I look in my web browser's console log, I see 'Failed to load resource: the server responded with a status of 401 ()', meaning there is some kind of authorization issue for the URL https://management.azure.com/subscriptions/....?api-version=2025-05-01 (I have deliberately removed the hexadecimal subscription ID).

Failure of azure data factory integration runtime with Vnet enabled
I had been using Data Factory's integration runtime with VNet successfully, but it recently stopped connecting to Cosmos DB with the MongoDB API (which is also within a VNet). After setting up a new integration runtime with VNet enabled and selecting 'Auto Resolve' as the region, the pipeline ran successfully with this new runtime. Could you help me understand why the previous integration runtime (configured with VNet enabled and the region set to match that of Azure Data Factory) worked for over a month but then suddenly failed? The new integration runtime with VNet and the 'Auto Resolve' region worked, but I'm uncertain whether the 'Auto Resolve' region contributed to the success or something else allowed it to connect. Error: Failure happened on 'Source' side. ErrorCode=MongoDbConnectionTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=>Connection to MongoDB server is timeout.,Source=Microsoft.DataTransfer.Runtime.MongoDbAtlasConnector,''Type=System.TimeoutException,Message=A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/cosmontiv01u.mongo.cosmos.azure.com:10255" }", EndPoint:
Recent Blogs
- What do the new SQL 2022 CDC extended events track? CDC jobs include: Capture Job - captures changes into the capture table. Clean-up Job - removes older data from the capture table. Each eve... (Dec 10, 2024)
- We are pleased to announce a limited preview of a new query hint, ABORT_QUERY_EXECUTION. The hint is intended to be used as a Query Store hint to let administrators block future execution of known pr... (Dec 10, 2024)