Recent Discussions
Copy Data Activity Failed with Unreasonable Cause
It is a simple setup, but it has baffled me. I'd like to copy data to a data lake via an API. Here are the steps I've taken (screenshots not shown):

1. Created an HTTP linked service.
2. Created a dataset with the HTTP Binary format.
3. Created a pipeline with a single Copy Data activity.
4. Verified that the linked service and the dataset both connect successfully.
5. Created a sink dataset with three parameters.
6. Passed the parameters from the pipeline to the sink dataset.

That's all. Simple, right? But the pipeline failed with the message "usually this is caused by invalid credentials."

Summary: there is no need to worry about the sink-side parameters; I have used the same setup for years in other pipelines and they all succeeded. This time the copy from the API source to the data lake failed with "invalid credentials". As step 4 above shows, the linked service and dataset connection tests succeeded, i.e. the credentials have already been checked and accepted. How can the Copy Data activity then fail complaining about invalid credentials? Pretty weird. Any advice and suggestions are welcome.

User Properties of Activities in ADF: How to add dynamic content in it?
In ADF, I am using a ForEach loop that runs an Execute Pipeline activity once for each item passed to the loop. I am stuck on a scenario that requires me to add a dynamic content expression to the User Properties of individual activities. Specific to my case, I want to add a dynamic content expression to the User Properties of the Execute Pipeline activity so that the individual runs of the activity show up in Azure Monitor with a specific label attached through their User Properties. The reason I need dynamic content in the User Properties is that each iteration corresponds to a particular step from the set of steps configured for the data load job as a whole, which is orchestrated through ADF, and I need the User Properties to identify which job step a given run belongs to. Any response is highly appreciated. Thank you!

Dataflow Snowflake connection issue
I'm trying to set up a Snowflake sink in a data flow, but when I test the connection it fails with a JDBC driver communication error. I tried searching online and looking at the documentation, but couldn't find anything about this issue. The same dataset works fine outside of the data flow; I can preview its data there, so there seems to be an issue with the data flow itself. Even when I execute the data flow through a pipeline, the same error message comes up. Does anyone know how to solve this problem with data flows? Also, regarding the sink settings: if I select "Recreate table", will it create the table in Snowflake if it doesn't already exist? I'm trying to find an easy way to copy a lot of tables into Snowflake without explicitly having to create each table first, especially when the metadata is only known at runtime. The pipeline copy job doesn't work for this because the table has to exist before it can insert data into it, but the data flow seems promising if the connection actually works.

Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime
This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1. Most of my connections use service_name during authentication, so according to https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed. Failed to open the Oracle database connection. ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string. ORA-12650: No common encryption or data integrity algorithm (https://docs.oracle.com/error-help/db/ora-12650/)

I did some digging on this error code, and the troubleshooting doc (https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle) suggests that I reach out to my Oracle DBA to update the Oracle server settings. I did, but I have zero confidence the DBA will take any action. Then I happened across this documentation about the upgraded connector: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector. Is this for real? ADF won't be able to connect to old versions of Oracle? If so, I'm in trouble, because my company is extremely legacy and all of our Oracle servers are at 11g. I also tried adding additional connection properties to my linked service like this, but I honestly have no idea what I'm doing:

Encryption client: accepted
Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
Crypto checksum client: accepted
Crypto checksum types client: SHA1, SHA256, SHA384, SHA512

No matter what, the issue persists. :( Am I missing something obvious? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime? I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.
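For reference, one way to see what the server side currently negotiates (before asking the DBA to change the server settings, typically in sqlnet.ora) is to query the session connection banner from any client that can still connect, for example SQL*Plus or a session through the old 1.0 connector. This is only a sketch: it assumes you have SELECT privilege on the V$ views, and it reports what an existing session negotiated rather than everything the server would accept.

```sql
-- Sketch: show the network encryption and crypto-checksumming services that
-- the Oracle server negotiated for the current session. Look for lines such as
-- "Encryption service" and "Crypto-checksumming service" in the banner text.
SELECT network_service_banner
FROM   v$session_connect_info
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```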

Prevent Accidental Deletion of an Instance in Azure Postgres
Did you know that accidental deletion of database servers is a leading source of support tickets? Read this blog post to learn how you can safeguard your Azure Database for PostgreSQL Flexible Server instances using ARM's CanNotDelete lock, an easy best practice that helps prevent accidental deletions while keeping regular operations seamless. 🌐 Prevent Accidental Deletion of an Instance in Azure Postgres

Increasing the number of scripts for Bicep deployments
We were quite surprised to run into a limit of 50 scripts per ADX cluster... Considering there is a limit of 10,000 databases, a limit of 50 scripts seems wrong. Is this a bug? Is there any way to increase it? Our use case really requires that we can provision certain assets alongside our Bicep infra code. We give each customer their own database and provision certain default tables and table-mapping assets there. We started to hit this 50-script limit after just 4 customers...

Data flow sink to Blob storage not writing to subfolder
Hi everybody, this seems like it should be straightforward, but it just doesn't seem to work... I have a file containing JSON data, one document per line, with many different types of data. Each type is identified by a field named "OBJ", which tells me what kind of data it contains. I want to split this file into separate files in Blob storage, one per object type, prior to doing some downstream processing. So I have a very simple data flow: a source which loads the whole file, and a sink which writes the data back out to separate files. In the sink settings, I've set the "File name option" setting to "Name file as column data" and selected my OBJ column for the "Column data" setting, and this basically works: it writes out a separate file for each OBJ value, containing the right data. So far, so good. However, what doesn't seem to work is the very simplest thing: I want to write the output files to a folder in my Blob storage container, but the sink seems to completely ignore the "Folder path" setting and just writes them into the root of the container. I can write my output files to a different container, but not to a subfolder inside the same container. It even creates the folder if it's not there already, but doesn't use it. Am I missing something obvious, or does the "Folder path" setting just not work when naming files from column data? Is there a way around this?

September 2025 Recap: What’s New with Azure Database for PostgreSQL
September 2025 Recap for Azure Database for PostgreSQL: September was a big month for Azure Postgres! From the public preview of PostgreSQL 18 (launched the same day as the community release!) to the GA of Azure Confidential Computing and Near Zero Downtime scaling for HA, this update is packed with new capabilities that make PostgreSQL on Azure more secure, performant, and developer-friendly. 💡 Here’s a quick peek at what’s inside:

PostgreSQL 18 (Preview) – early access to the latest community release on Azure
Near Zero Downtime Scaling (GA) – compute scaling in under 30 seconds for HA servers
Azure Confidential Computing (GA) – hardware-backed data-in-use protection
PostgreSQL Discovery & Assessment in Azure Migrate (Preview) – plan your migration smarter
LlamaIndex Integration – build AI apps and vector search using Azure Postgres
VS Code Extension Enhancements – new Server Dashboard + Copilot Chat integration

Catch all the highlights and hands-on guides in the full recap 👉 #PostgreSQL #AzureDatabase #AzurePostgres #CloudDatabases #AI #OpenSource #Microsoft

PostgreSQL 18 Preview on Azure Database for PostgreSQL
PostgreSQL 18 Preview on Azure Postgres Flexible Server: We’re excited to bring the latest Postgres innovations directly into Azure. With the PG18 preview, you can already test:

🔹 Asynchronous I/O (AIO) → faster queries & lower latency
🔹 Vacuuming enhancements → less bloat, fewer replication conflicts
🔹 UUIDv7 support → better indexing & sort locality
🔹 B-Tree skip scan → more efficient use of multi-column indexes
🔹 Improved logical replication & DDL → easier schema evolution across replicas

And that’s just the start: PG18 includes hundreds of community contributions, with 496 from Microsoft engineers alone 💪 👉 Try it out today on Azure Postgres Flexible Server (initially in East Asia), share your feedback, and help shape GA.
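As a quick, hedged taste of the UUIDv7 support listed above, here is a minimal sketch you could run against a PG18 preview server. It assumes only the uuidv7() function introduced in PostgreSQL 18; the table and column names are made up for illustration.

```sql
-- Sketch for PostgreSQL 18: uuidv7() produces time-ordered UUIDs, so new rows
-- land near the end of the primary-key index (better sort locality than
-- random UUIDs). Table and column names are illustrative only.
CREATE TABLE demo_events (
    id         uuid PRIMARY KEY DEFAULT uuidv7(),
    payload    jsonb,
    created_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO demo_events (payload) VALUES ('{"kind": "demo"}');

-- Because uuidv7 values sort roughly by creation time, ordering by id is
-- close to insertion order and can use the primary-key index.
SELECT id, created_at FROM demo_events ORDER BY id DESC LIMIT 10;
```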

Problem with Linked Service to SQL Managed Instance
Hi, I'm trying to create a linked service to a SQL Managed Instance. The Managed Instance is configured with a VNet-local endpoint. If I try to connect with an Auto-resolve IR or a SHIR, I get the following error:

The value of the property '' is invalid: 'The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net''.

Is there a way to connect to it without resorting to a private endpoint? Cheers, Alex

Azure SQL server rollback itself?
We have an Azure SQL server that is a data source for a Power Apps canvas app. Today I connected to it with SSMS v19. First, I ran 'BEGIN TRAN' twice (is that a mistake?). Then I ran 'DELETE FROM dbo.table1 WHERE ID=30' and deleted another row with ID=31. I verified that these two rows were deleted with 'SELECT * FROM dbo.table1'. Finally, I ran 'COMMIT TRAN' and verified again that the two rows were deleted. However, there was no change in the Power App, so I reopened SSMS and connected to the DB again. This time, when I ran 'SELECT * FROM dbo.table1', the two rows showed up again. What could be the problem? Is it a bug in the old version of SSMS?
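The doubled BEGIN TRAN is the most likely culprit here, although that is only a hypothesis based on the steps described. In T-SQL, each BEGIN TRAN increments @@TRANCOUNT, while a single COMMIT only decrements it, so after one COMMIT the outer transaction is still open and is rolled back when the session ends. A minimal sketch of that behaviour, reusing the table name from the post:

```sql
-- Sketch: nested BEGIN TRAN behaviour in T-SQL.
BEGIN TRAN;
BEGIN TRAN;                        -- @@TRANCOUNT is now 2
SELECT @@TRANCOUNT AS open_trans;  -- returns 2

DELETE FROM dbo.table1 WHERE ID IN (30, 31);

COMMIT TRAN;                       -- only decrements the counter
SELECT @@TRANCOUNT AS open_trans;  -- returns 1: the outer transaction is still
                                   -- open, so the deletes are not durable yet

COMMIT TRAN;                       -- a second COMMIT is required to really commit;
                                   -- closing the session instead rolls everything back
```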

ADF connection issue with Cassandra
Hi, I am trying to connect to a Cassandra DB hosted in Azure Cosmos DB. I created the linked service but I'm getting the error below on test connection. I have already checked the Cassandra DB and its public network access is set to all networks. Google suggested enabling SSL, but there is no such option in the linked service. Please help.

Failed to connect to the connector. Error code: 'Unknown', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to the connector. Error code: 'InternalError', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to Cassandra server due to, ErrorCode: InternalError
All hosts tried for query failed (tried 51.107.58.67:10350: SocketException 'A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied')

August 2025 Recap: Azure Database for PostgreSQL
Here’s what’s new this month to help you build smarter and scale securely:

Advisor performance tuning (GA): new insights on index scans, logging, stats, and connections
Entra ID group login (Preview): let users sign in with their own credentials (no need to log in using the group ID)
New region – Austria East: lower latency + data residency options for Central Europe
LangChain & LangGraph support: use Azure PostgreSQL as a vector store for AI agents
Active-active replication guide: step-by-step walkthrough using pglogical

Full details in the monthly recap: https://techcommunity.microsoft.com/blog/adforpostgresql/august-2025-recap-azure-database-for-postgresql/4450527

Help with Partial MongoDB Update via Azure Data Factory Data Flow
Hello, everyone! I have a complex question about how to perform a partial update on a MongoDB collection using a Data Flow in Azure Data Factory. My goal is to modify only some nested fields without overwriting the entire document. My flow reads JSON files with the following structure:

{ "_id": { "$oid": "1xp3232to" }, "root_field": "root_value", "main_array": [ { "array_id": "id001", "status": "PENDING", "nested_array": [] } ], "numeric_value": { "$numberDecimal": "10.99" } }

I need the Data Flow to make two changes in a single run:

1. Change the status field from "PENDING" to "SENT".
2. Add a new object to nested_array with the following data: event: "SENT", description: "FILE GENERATED", timestamp: (current date and time), system: "Sis Test".

I've tried some expressions with update and append in the Derived Column transformation, but I can't get the syntax right to make both changes at the same time. My biggest concern is with the MongoDB sink: how do I configure it so that the Data Flow performs a partial update and doesn't overwrite the entire document, losing root_field, numeric_value, etc.? My questions are:

1. What is the correct expression for the Derived Column that makes these two nested modifications in a single step?
2. How should I configure the MongoDB sink to ensure the update is partial, using _id as the key?

I really appreciate the community's help!

Reliable Interactive Resources for the DP-300 exam
Hello everyone, I hope you're all having a great day! I wanted to reach out and start a discussion about preparing for the DP-300 (Azure Database Administrator) certification exam. I’ve been researching various resources, but I’m struggling to find reliable and interactive materials that truly help with exam prep. For those who have already passed the DP-300, could you share any interactive and trustworthy resources you used during your study? Whether it's courses, hands-on labs, or practice exams, I’d really appreciate your recommendations. Any advice on how to effectively prepare would be incredibly helpful! Thank you so much for your time reading this discussion and for sharing your experiences!

timechart legend in Azure Data Explorer
Hi, I'm creating a timechart dashboard in Azure Data Explorer and facing an issue with the legend labels. The labels have extra prefixes and suffixes, such as "Endpoint" or "Count". How can I remove these and show only the actual value in the legend? Thank you!

User configuration issue
Hi, I am getting the error below:

"Execution fail against sql server. Please contact SQL Server team if you need further support. Sql error number: 3930. Error Message: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction"

I have never faced this kind of error before. Could anyone please let me know what I can do and what I need to do? I am a beginner, so please explain. Thanks.
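Error 3930 generally means the transaction has become "doomed", typically because an error occurred inside a transaction (often with XACT_ABORT on) and the code then tried to keep writing or to COMMIT instead of rolling back. Below is a minimal, hedged sketch of the usual handling pattern; the table name is illustrative and the exact fix depends on the procedure that raises the error.

```sql
-- Sketch: check XACT_STATE() in the CATCH block and roll back a doomed
-- transaction instead of trying to commit it or continue writing.
SET XACT_ABORT ON;

BEGIN TRY
    BEGIN TRAN;

    UPDATE dbo.some_table SET col1 = 'value' WHERE id = 1;  -- illustrative work

    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0      -- -1 = doomed, 1 = still committable; either way,
        ROLLBACK TRAN;        -- roll back here rather than attempting a COMMIT

    THROW;                    -- re-raise the original error to the caller
END CATCH;
```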

How to use existing cache for external table when acceleration in progress
I enabled query acceleration for my external table, which is bound to a 1 TB Delta table on ADLS, but the acceleration takes about 1.5 hours to complete. I have found that while acceleration is in progress, querying the table is quite a bit slower than after acceleration has completed. How can I keep using the existing acceleration cache/index while acceleration is in progress, and have Kusto switch to the new index only once it has completed?
Recent Blogs
- Co-authored with angesalsaa. Symptoms: Customer attempted to restore a server configured with public access into a private virtual network. Restore operation failed with an error indicating u... (Dec 03, 2025)
- Why This Approach? Migrating large objects (LOBs) between PostgreSQL servers can be challenging due to size, performance, and complexity. Traditional methods often involve exporting to files and re... (Dec 02, 2025)