Recent Discussions
Copy Activity Successful, But Times Out
This appears to be an edge case, but I wanted to share. A copy activity succeeds, but times out: the duration is 1:58:55 and it times out at 2:00:12. It then runs a second time and succeeds, loading duplicate records. The duplicate records are the undesired result.
Copy Activity settings:
- General: Timeout: 0.02:00:00, Retry: 2
- Source: MySQL, parameterized SQL, parameterized
- Sink: Synapse SQL pool, parameterized; copy method: COPY command
- Settings: Use V2 hierarchy storage for staging
- General: Synapse/ADF managed network
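For context on the settings above: in a pipeline definition the timeout and retry values sit in the copy activity's policy block, so a run that the service marks as timed out can be retried even though the first attempt already loaded rows. A minimal sketch of such an activity follows; the activity name and the exact source/sink types are illustrative assumptions rather than details from the post:

    {
      "name": "CopyMySqlToSynapse",
      "type": "Copy",
      "policy": {
        "timeout": "0.02:00:00",
        "retry": 2,
        "retryIntervalInSeconds": 30
      },
      "typeProperties": {
        "source": { "type": "MySqlSource" },
        "sink": { "type": "SqlDWSink", "allowCopyCommand": true },
        "enableStaging": true
      }
    }

With retry set to 2, any attempt the service considers failed (including a timeout reported just after the data has landed) is re-executed, which matches the duplicate load described above.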

Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime
This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought, "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1. Most of my connections use service_name during authentication, so according to the docs I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed. Failed to open the Oracle database connection. ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string ORA-12650: No common encryption or data integrity algorithm https://docs.oracle.com/error-help/db/ora-12650/

I did some digging on this error code, and the troubleshooting doc suggests that I reach out to my Oracle DBA to update the Oracle server settings. I did, but I have zero confidence the DBA will take any action. https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle

Then I happened across this documentation about the upgraded connector: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector

Is this for real? ADF won't be able to connect to old versions of Oracle? If so, I'm effed, because my company is so, so legacy and all of our Oracle servers are at 11g. I also tried adding additional connection properties in my linked service connection like this, but I honestly have no idea what I'm doing:
- Encryption client: accepted
- Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
- Crypto checksum client: accepted
- Crypto checksum types client: SHA1, SHA256, SHA384, SHA512

But no matter what, the issue persists. :( Am I missing something stupid? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime? I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.
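For what it's worth, a sketch of where those client-side settings would live in the 2.0 linked service JSON is below. The property names are approximations inferred from the UI labels in the post and should be verified against the Oracle 2.0 connector documentation; the host, credentials, and integration runtime reference are placeholders:

    {
      "name": "OracleLegacy",
      "properties": {
        "type": "Oracle",
        "version": "2.0",
        "typeProperties": {
          "server": "myhost:1521/my_service_name",
          "authenticationType": "Basic",
          "username": "my_user",
          "password": { "type": "SecureString", "value": "<password>" },
          "encryptionClient": "accepted",
          "encryptionTypesClient": "AES128,AES192,AES256,3DES112,3DES168",
          "cryptoChecksumClient": "accepted",
          "cryptoChecksumTypesClient": "SHA1,SHA256,SHA384,SHA512"
        },
        "connectVia": { "referenceName": "<self-hosted-ir>", "type": "IntegrationRuntimeReference" }
      }
    }

Since ORA-12650 means the client and server could not agree on any algorithm, client-side settings like these only help if the 11g server still offers at least one algorithm the new driver supports; otherwise the server-side change suggested in the troubleshooting doc is unavoidable.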

Advice requested: how to capture full SQL CDC changes using Dataflow and ADLS gen2
Hi, I'm working on a fairly simple ETL process using Dataflow in Azure Data Factory, where I want to capture the changes in a CDC-enabled SQL table and store them in Delta Lake format in an ADLS Gen2 sink. The resulting dataset will be further processed, but for me this is the end of the line. I don't have an expert understanding of all the details of the Delta Lake format, but I do know that I can use it to store changes to my data over time. So in the sink I enabled all update methods (insert, delete, upsert, update), since my CDC source should be able to figure out the correct row transformation. Key columns are set to the primary key columns in SQL.

All this works fine as long as I configure my source to use CDC with 'netChanges: true'. That yields a single change row for each record, which is correctly stored in the sink. But I want to capture all changes since the previous run, so I want to set the source to 'netChanges: false'. That yields rows for every change since the previous time the dataflow ran. But for every table that actually has records with more than one change, the dataflow fails with "Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways." I take that to mean that my dataflow is not, as it stands, smart enough to loop through all changes in the source and apply them to the sink in order. So apparently something else has to be done.

My intuition says that, since CDC actually provides all the metadata to make this possible, there's probably an out-of-the-box way to achieve what I want. But I can't readily find that magic box I should tick 😉. I could probably build it out 'by hand', by somehow looping over all changes and applying them in order, but before I go down that route, I came here to learn from the experts whether this is indeed the only way, or, preferably, whether there is a neat trick I missed to get this done easily. Thanks so much for your advice! BR

PostgreSQL 17 General Availability with In-Place Upgrade Support
We're excited to share that PostgreSQL 17 is now Generally Available on Azure Database for PostgreSQL – Flexible Server! This release brings community-driven enhancements including improved vacuum performance, smarter query planning, enhanced JSON functions, and dynamic logical replication. It also includes support for in-place major version upgrades, allowing customers to upgrade directly from PostgreSQL 11–16 to 17 without needing to migrate data or change connection strings. PostgreSQL 17 is now the default version for new server creations and major version upgrades. 📖 Read the full blog post: http://aka.ms/PG17 Let us know if you have feedback or questions!

Solution: Handling Concurrency in Azure Data Factory with Marker Files and Web Activities
Hi everyone, I wanted to share a concurrency issue we encountered in Azure Data Factory (ADF) and how we resolved it using a small but effective enhancement, one that might be useful if you're working with shared Blob Storage across multiple environments (like Dev, Test, and Prod).

Background: Shared Blob Storage & Marker Files
In our ADF pipelines, we extract data from various sources (e.g., SharePoint, Oracle) and store it in Azure Blob Storage. That Blob container is shared across multiple environments. To prevent duplicate extractions, we use marker files:
- started.marker → created when a copy begins
- completed.marker → created when the copy finishes successfully
If both markers exist, pipelines reuse the existing file (caching logic). This mechanism was already in place and worked well under normal conditions.

The Issue: Race Conditions
We observed that simultaneous executions from multiple environments sometimes led to:
- Overlapping attempts to create the same started.marker
- Duplicate copy activities
- Corrupted Blob files
This became a serious concern because the Blob file was later loaded into Azure SQL Server, and any corruption led to failed loads.

The Fix: Web Activity + REST API
To solve this, we modified only the creation of started.marker by:
- Replacing the Copy Activity with a Web Activity that calls the Azure Storage REST API
- The API call uses Azure Blob Storage's conditional header If-None-Match: * to safely create the file only if it doesn't exist
- If the file already exists, the API returns "BlobAlreadyExists", which the pipeline handles by skipping
The Copy Activity is still used to copy the data and create the completed.marker, so no changes are needed there. (A sketch of such a Web Activity call is shown after this post.)

Updated Flow
1. Check marker files:
   - If both started.marker and completed.marker exist → use the cached file
   - If only started.marker exists → wait and retry
   - If neither exists → continue to step 2
2. A Web Activity calls the REST API to create started.marker
   - Success → proceed with the copy in step 3
   - Failure → another run already started → skip/retry
3. The Copy Activity performs the data extract
4. The Copy Activity creates completed.marker

Benefits
- Atomic creation of started.marker → no race conditions
- Minimal change to existing pipeline logic with marker files
- Reliable downstream loads into Azure SQL Server
- Preserves the existing architecture (no full redesign)

Would love to hear:
- Have you used similar marker-based patterns in ADF?
- Any other approaches to concurrency control that worked for your team?

Thanks for reading! Hope this helps someone facing similar issues.
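A minimal sketch of the Web Activity step described above, calling the Put Blob REST API with the conditional If-None-Match: * header. The storage account, container, API version, and managed identity authentication are illustrative assumptions, not details taken from the post:

    {
      "name": "CreateStartedMarker",
      "type": "WebActivity",
      "typeProperties": {
        "method": "PUT",
        "url": "https://<storageaccount>.blob.core.windows.net/<container>/started.marker",
        "headers": {
          "x-ms-version": "2020-10-02",
          "x-ms-blob-type": "BlockBlob",
          "If-None-Match": "*"
        },
        "body": "started",
        "authentication": { "type": "MSI", "resource": "https://storage.azure.com/" }
      }
    }

If the blob already exists, the service rejects the request with a 409 BlobAlreadyExists response, which the pipeline interprets as "another run got there first" and takes the skip/retry branch in the flow above.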

Copy Activity - JSON Mapping
Hello, I have created a copy activity in Azure Synapse Analytics. I have a JSON file as input and would like to unpack it and save it as a CSV file. I have tried several times but cannot get the data into the correct output. Below is my input file:

    {
      "status": "success",
      "requestTime": "2025-06-26 15:23:41",
      "data": [ "Monday", "Tuesday", "Wednesday" ]
    }

I would like to save it in the following output:

    status   requestTime       Data
    success  26/06/2025 15:23  Monday
    success  26/06/2025 15:23  Tuesday
    success  26/06/2025 15:23  Wednesday

I am struggling to configure the mapping section correctly. I cannot understand how to unpack the data array: $['data'][0] gives me the first element, but I would like to extract all elements in the format above. Any help would be appreciated.
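One documented pattern for flattening an array during copy is a TabularTranslator mapping with collectionReference pointing at the array, roughly as sketched below. Note that the item path "$" for an array of plain strings (rather than objects) is an assumption here and may need adjusting, and converting requestTime to the dd/MM/yyyy HH:mm display shown above would still require a separate expression or transformation:

    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "path": "$['status']" }, "sink": { "name": "status" } },
        { "source": { "path": "$['requestTime']" }, "sink": { "name": "requestTime" } },
        { "source": { "path": "$" }, "sink": { "name": "Data" } }
      ],
      "collectionReference": "$['data']"
    }

With collectionReference set, the copy activity repeats the top-level columns for each element of the data array, which is the row shape requested in the table above.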

Export to Excel is not working
Hi, after the recent Azure Data Explorer Web UI update, the "Export to Excel" feature is no longer functioning as expected. It still works for simple tables, though it takes longer than before, but it fails for tables containing complex data outputs such as empty, null, array [], or JSON values. Clicking the "Export to Excel" option does not produce the expected results. Could you please investigate this issue and provide guidance on a resolution? Thank you,

Oracle 2.0 property authenticationType is not specified
I just published the upgrade to the Oracle 2.0 connector (linked service) and all my pipelines ran OK in dev. This morning I woke up to lots of red pipelines that ran during the night. I get the following error message:

ErrorCode=OracleConnectionOpenError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message= Failed to open the Oracle database connection.,Source=Microsoft.DataTransfer.Connectors.OracleV2Core,''Type=System.ArgumentException, Message=The required property is not specified. Parameter name: authenticationType,Source=Microsoft.Azure.Data.Governance.Plugins.Core,'

Here is the code for my Oracle linked service:

    {
      "name": "Oracle",
      "properties": {
        "parameters": {
          "host": { "type": "string" },
          "port": { "type": "string", "defaultValue": "1521" },
          "service_name": { "type": "string" },
          "username": { "type": "string" },
          "password_secret_name": { "type": "string" }
        },
        "annotations": [],
        "type": "Oracle",
        "version": "2.0",
        "typeProperties": {
          "server": "@{linkedService().host}:@{linkedService().port}/@{linkedService().service_name}",
          "authenticationType": "Basic",
          "username": "@{linkedService().username}",
          "password": {
            "type": "AzureKeyVaultSecret",
            "store": {
              "referenceName": "Keyvault",
              "type": "LinkedServiceReference"
            },
            "secretName": {
              "value": "@linkedService().password_secret_name",
              "type": "Expression"
            }
          },
          "supportV1DataTypes": true
        },
        "connectVia": {
          "referenceName": "leap-prod-onprem-ir-001",
          "type": "IntegrationRuntimeReference"
        }
      }
    }

As you can see, "authenticationType" is defined, but my guess is that the publish and deployment step somehow drops that property. We are using "modern" deployment in Azure DevOps pipelines using Node.js. Would appreciate some help with this!

Blob Storage Event Trigger Disappears
Yesterday I ran into an odd situation where there was a resource lock and I was unable to rename pipelines or drop/create storage event triggers. An admin cleared the lock and I was able to remove and clean up the triggers and pipelines. Today, when I try to recreate the blob storage trigger to process a file when it appears in a container, the trigger creates just fine but disappears on refresh. If I try to recreate it with the same name as the one that went away, the ADF UI says it already exists, yet I cannot assign it to a pipeline because the UI does not see it. Any insight as to where it is, how I can see it, or even which logs would have such activity recorded to give a clue as to what is going on? This seems like a bug.

Parameter controls are not showing Display text
Hi, after a recent update to the Azure Data Explorer Web UI, the Parameter controls are not displaying correctly. The Display Text for parameters is not shown by default; instead, the raw Value is displayed until the control is clicked, at which point the correct Display Text appears. Could you please investigate this issue and provide guidance on a resolution? Thank you,

June 2023 Update: Azure Database for PostgreSQL Flexible Server Unveils New Features
The Azure Database for PostgreSQL Flexible Server's June 2023 update is live! Now enjoy:
- Easier major version upgrades with reduced downtime.
- Server recovery feature for dropped servers.
- A more user-friendly Connect experience.
- Improved server performance with new IO enhancements.
- Auto-growing storage and online disk resize, now in public preview.
We also support minor versions PostgreSQL 15.2 (preview), 14.7, 13.10, 12.14, 11.19. Big thanks to our dedicated team! Check out our blog for more details: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/june-2023-recap-azure-database-postgresql-flexible-server/ba-p/3868650

July 2023 Recap: Azure Database PostgreSQL Flexible Server
- Support for PostgreSQL 15 is now available (general availability).
- Automation Tasks have been introduced for streamlined management (preview).
- Flexible Server migration tooling has been enhanced (general availability).
- Hardware options have been expanded with the addition of AMD compute SKUs (general availability).
These updates represent substantial improvements in performance, scalability, and efficiency. Whether you are a developer, a Database Administrator (DBA), or an individual passionate about PostgreSQL, we trust that these enhancements will contribute positively to your experience with our platform. Should you find these updates valuable, we encourage you to engage with us through appropriate channels of communication. Thank you for your continued support and interest in Azure Database for PostgreSQL Flexible Server.

Autoscaling with Azure: A Comprehensive Guide to PostgreSQL Optimization Using Azure Automation Task
Autoscaling Azure PostgreSQL Server with Automation Tasks
Read our latest article detailing the power of autoscaling the Azure Database for PostgreSQL Flexible Server using Azure Automation Tasks. This new feature can revolutionize how we manage resources, streamlining operations and minimizing human error.

August 2023 Recap: Azure Database for PostgreSQL Flexible Server
Absolutely thrilled to unveil our latest blog post, "August 2023 Recap: Azure Database for PostgreSQL Flexible Server". This month is jam-packed with feature updates designed to amplify your experience!
1. Autovacuum Monitoring - Elevate your database health with improved tools and metrics.
2. Flexible DNS Zone Linking - Simplify your server setup process for multiple networking models.
3. Server Parameter Visibility enhancements - Now view hidden parameters for better performance optimization.
4. Single to Flexible Server Migration Tooling - Simplified migration experience with automated extension allow listing.
Don't miss out! Read the full scoop here: August 2023 Recap: Azure Database for PostgreSQL Flexible Server

PostgreSQL 16 generally available (September 14, 2023)
Detailed release notes: https://www.postgresql.org/about/news/postgresql-16-released-2715/
How has PostgreSQL 16's new feature set changed the game for your database operations? Share your favorite enhancements and unexpected wins!

November 2023 Recap: Azure PostgreSQL Flexible Server
Excited to share our November 2023 updates for Azure Database for PostgreSQL Flexible Server:
- Server Logs management has been streamlined for better monitoring and troubleshooting, along with customizable retention periods.
- Embracing the latest in security, we now support TLS Version 1.3, ensuring the most secure and efficient client-server communications.
- Migrations are smoother with our new Pre-Migration Validation feature, making your transition to Flexible Server seamless.
- Microsoft Defender integration provides proactive anomaly detection and real-time alerts to safeguard your databases.
- Additionally, we've upgraded user and role migration capabilities for a more accurate and hassle-free experience.
👥 Link - https://lnkd.in/gMMGaiAK
Stay tuned for more updates, and feel free to share your experiences with these new features!

February 2024 Recap: Azure PostgreSQL Flexible Server
Azure Database for PostgreSQL Flexible Server - Feb '24 Feature Recap:
- General Availability of Private Endpoints across all public Azure regions for secure, flexible connectivity.
- Latest extension versions to enhance your PostgreSQL performance and security.
- Latest Postgres minor versions (16.1, 15.5, 14.10, 13.13, 12.17, 11.22) now supported for automatic upgrades.
- Enhanced Major Version Upgrade Logging for smoother upgrades.
- pgvector 0.6.0 introduced for better vector similarity searches.
- Real-time text translation now available with the Azure_AI extension.
- Easier Online Migration from Single Server to Flexible Server in public preview.
We recommend reading our latest blog post to explore these updates in detail - https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/february-2024-recap-azure-postgresql-flexible-server/ba-p/4089037

March 2024 Recap: Azure PostgreSQL Flexible Server
Azure Database for PostgreSQL - Flexible Server March '24 Feature Recap:
- Migration Made Easy: Seamlessly transfer PostgreSQL instances with Migration Service (GA).
- Latest Postgres Minor Versions: Automatically updated to include Postgres versions 16.2, 15.6, 14.11, 13.14, and 12.18.
- Postgres 16 Major Version Upgrade: Test drive the newest features of PostgreSQL 16 with minimal disruption.
- AI Predictions in Real-Time: Integrate machine learning predictions directly within your database with Azure_AI.
- New Monitoring Metric: Monitor 'Database Size' for precise capacity planning and performance optimization.
Team Microsoft delivered impactful sessions and engaged with the community in Bengaluru @ PGConf India. Check out our blog for a full rundown of March's updates and how they can empower your projects - https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/march-2024-recap-azure-postgresql-flexible-server/ba-p/4107275

April 2024 Recap: Azure PostgreSQL Flexible Server
- Disaster Recovery Improvements: Updates to read replicas ensure seamless disaster recovery with minimal downtime.
- New PgBouncer 1.22.1: Experience enhanced connection pooling for better performance and stability.
- Easier Online Migration: Move from Single Server to Flexible Server without stopping your work.
- Extension Updates: Keep your database extensions up to date with ease.
- Latest Extension Versions: From TimescaleDB to PostGIS, leverage the latest functionalities to enhance your data management.
Check out our new April 2024 Recap blog post for all the details and see how these changes can help you.

June 2024 Recap: Azure PostgreSQL Flexible Server
June 2024 Feature Recap for Azure Database for PostgreSQL Flexible Server:
- Major Version Upgrades Made Simple - Seamlessly upgrade to PostgreSQL 16 with minimal downtime (now GA).
- New Minor Version Updates - Keep your server updated with the latest PostgreSQL minor versions effortlessly.
- Pgvector Extension - Discover new capabilities in pgvector v0.7.0, enhancing your database's performance (now GA).
- Migration Service Improvements - Enhanced tools and updates streamline your migration processes.
- Revamped Troubleshooting Guides - Newly updated guides to efficiently resolve database issues.
- Expanded Regional Support - Introducing support for China North 2 and China East 2 regions.
- Enhanced Backup Options - Long-term backup support now available for CMK-enabled servers (preview).
- Highlights from POSETTE 2024 - Dive into the insights shared at Azure PostgreSQL sessions!
Big shout out to our engineering and product teams for their fantastic work on these features! Dive deeper into the details on our blog post here.
Events
Recent Blogs
- Some days ago, we were working on a service request where our customer was experiencing an issue with their application while reading a CSV file from Azure Blob Storage and writing the data into an A... (Jul 11, 2025)
- Why Custom Ports? By default, MySQL servers listen on port 3306. While this works well for most setups, some organizations need custom port configurations—for example, to comply with network securi... (Jul 10, 2025)