Recent Discussions
Reliable Interactive Resources for the DP-300 Exam
Hello everyone, I hope you're all having a great day! I wanted to reach out and start a discussion about preparing for the DP-300 (Azure Database Administrator) certification exam. I've been researching various resources, but I'm struggling to find reliable and interactive materials that truly help with exam prep. For those who have already passed the DP-300, could you share any interactive and trustworthy resources you used during your study? Whether it's courses, hands-on labs, or practice exams, I'd really appreciate your recommendations. Any advice on how to effectively prepare would be incredibly helpful! Thank you so much for your time reading this discussion and for sharing your experiences!
Best practice to integrate with Azure DevOps?
Different sources suggest different recommendations regarding ADF and ADO integration. Some say to use the 'adf_publish' branch, while others suggest using the 'main' branch as the source for triggering YAML pipelines and disabling the 'Publish' function in ADF. I guess practices are changing and setups can differ; the problem is that finding all this information on the Internet makes it so confusing. So, the question is: what is the best practice now (taking into account all the latest changes in ADO) regarding branches? How do you set up your ADF and ADO integrations?
Azure SQL DB Index Recommendations
For our Azure SQL databases across multiple regions and multiple subscriptions, we keep getting recommendations/warnings saying that indexes need to be added, but those recommendations have already been implemented and the indexes exist. We understand that manually applied recommendations remain active and shown in the list of recommendations for 24-48 hours before the system automatically withdraws them, but this is a recurring issue for these databases and is causing a lot of churn. Basically, we want to understand: if the recommendation has already been applied, why is the underlying issue not resolved? Will the recommendations stay there forever, or is there a way to force them to be evaluated again? Thanks.
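One way to sanity-check a recurring recommendation is to confirm that the suggested key and included columns really do match an existing index. A minimal T-SQL sketch against the catalog views, using a hypothetical table name:

-- Sketch only (hypothetical table name): list existing indexes and their key/included
-- columns so they can be compared against what the advisor keeps recommending.
SELECT
    i.name                 AS index_name,
    c.name                 AS column_name,
    ic.key_ordinal,        -- 0 means the column is an INCLUDE column
    ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')   -- hypothetical table
ORDER BY i.name, ic.key_ordinal;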
Query from Excel in Synapse Serverless
I'm trying to use the following code to read from an Excel file using Synapse serverless. Is this even possible?

SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml;HDR=YES;Database=https://container.blob.core.windows.net/datalake/Bronze/Manual/Testing 20230106.xlsx',
    'SELECT * FROM [ProductList$]') AS ROWS;

I'm getting an error near OPENROWSET, which is less than helpful, from the Synapse serverless SQL engine.
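For context, serverless SQL pools generally read files only through OPENROWSET with the BULK provider over formats such as CSV, Parquet and Delta, so the ACE OLEDB Excel provider is unlikely to work there. A minimal sketch of the supported pattern, assuming the workbook were first exported to CSV at a hypothetical path:

-- Sketch only: assumes the Excel sheet has been exported to CSV in the same container.
SELECT *
FROM OPENROWSET(
    BULK 'https://container.blob.core.windows.net/datalake/Bronze/Manual/Testing_20230106.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS product_list;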
Azure SQL Database
Hi, I currently have a Microsoft Access application that uses data, queries, forms, reports, and custom VBA code. I'm looking for a modern cloud solution that I can use to recreate my application and then offer to my clients via a web link. Is Azure SQL Database a reasonable tool for this high-level requirement?
Azure SQL DB Deadlock
I see more than 2,000 deadlocks every day for my Azure SQL DB in the deadlock metrics, but they do not seem to cause any missing data or dropped transactions. My application does not use any retry logic, so how is it possible that the deadlocks are resolved automatically and appear to have no impact at all?
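As background, the engine resolves a deadlock by choosing one session as the victim, rolling back its transaction and raising error 1205; the other session continues normally, which may be why the metric is high even though data still lands (for example if the failing statement is re-issued by something higher in the stack). A minimal T-SQL sketch, with a hypothetical table, of the retry pattern the victim side would normally need:

-- Sketch with a hypothetical table: a deadlock victim gets error 1205 after its
-- transaction is rolled back, so the usual pattern is to retry the whole transaction.
DECLARE @attempt int = 1;
WHILE @attempt <= 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.SomeTable SET Amount = Amount + 1 WHERE Id = 42;  -- hypothetical work
        COMMIT TRANSACTION;
        BREAK;  -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @attempt < 3
            SET @attempt += 1;   -- we were the deadlock victim: try again
        ELSE
            THROW;               -- any other error, or out of retries
    END CATCH
END;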
Parameterization of Linked Services
I am trying to parameterize a Linked Service in ADF. I have probably confused myself, and I hope someone can make it clear. Two questions:
1. I have two parameters: 'url' and 'secretName'. However, in the ARM template I only see the 'url' parameter, but not 'secretName'. Why is 'secretName' not parameterized?
2. How do I supply a value for the 'url' parameter when I deploy the ARM template to another environment (let's say the 'Test' environment)?
These are the files:

Linked Service:
{
  "name": "LS_DynamicParam",
  "properties": {
    "parameters": {
      "SA_URL": { "type": "String", "defaultValue": "https://saforrisma.dfs.core.windows.net/" },
      "SecretName": { "type": "String", "defaultValue": "MySecretInKeyVault" }
    },
    "annotations": [],
    "type": "AzureBlobFS",
    "typeProperties": {
      "url": "@{linkedService().SA_URL}",
      "accountKey": {
        "type": "AzureKeyVaultSecret",
        "store": { "referenceName": "LS_AKV", "type": "LinkedServiceReference" },
        "secretName": { "value": "@linkedService().SecretName", "type": "Expression" }
      }
    }
  }
}

ARMTemplateParametersForFactory.json:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "factoryName": { "value": "ADF-Dev" },
    "LS_AKV_properties_typeProperties_baseUrl": { "value": "https://kv-forrisma.vault.azure.net/" },
    "LS_MAINStorage_properties_typeProperties_connectionString_secretName": { "value": "storageaccount-adf-dev" },
    "LS_DynamicParam_properties_typeProperties_url": { "value": "@{linkedService().SA_URL}" }
  }
}
Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime
This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought, "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1. Most of my connections use service_name during authentication, so according to the docs I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed.
Failed to open the Oracle database connection.
ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string
ORA-12650: No common encryption or data integrity algorithm
https://docs.oracle.com/error-help/db/ora-12650/

I did some digging on this error code, and the troubleshooting doc suggests that I reach out to my Oracle DBA to update the Oracle server settings. I did, but I have zero confidence the DBA will take any action.
https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle
Then I happened across this documentation about the upgraded connector:
https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector
Is this for real? ADF won't be able to connect to old versions of Oracle? If so, I'm effed, because my company is so, so legacy and all of our Oracle servers are at 11g. I also tried adding additional connection properties in my linked service connection like this, but I honestly have no idea what I'm doing:
Encryption client: accepted
Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
Crypto checksum client: accepted
Crypto checksum types client: SHA1, SHA256, SHA384, SHA512
But no matter what, the issue persists. :( Am I missing something stupid? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime? I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.
How to Flatten Nested Time-Series JSON from API into Azure SQL using ADF Mapping Data Flow?
Hi Community, I'm trying to extract and load data from an API returning the following JSON format into an Azure SQL table using Azure Data Factory.

{
  "2023-07-30": [],
  "2023-07-31": [],
  "2023-08-01": [
    { "breakdown": "email", "contacts": 2, "customers": 2 }
  ],
  "2023-08-02": [],
  "2023-08-03": [
    { "breakdown": "direct", "contacts": 5, "customers": 1 },
    { "breakdown": "referral", "contacts": 3, "customers": 0 }
  ],
  "2023-08-04": [],
  "2023-09-01": [
    { "breakdown": "direct", "contacts": 76, "customers": 40 }
  ],
  "2023-09-02": [],
  "2023-09-03": []
}

Goal: I want to flatten this nested structure and load it into Azure SQL like this:

ReportDate    Breakdown    Contacts    Customers
2023-07-30    (no row)     (no row)    (no row)
2023-07-31    (no row)     (no row)    (no row)
2023-08-01    email        2           2
2023-08-02    (no row)     (no row)    (no row)
2023-08-03    direct       5           1
2023-08-03    referral     3           0
2023-08-04    (no row)     (no row)    (no row)
2023-09-01    direct       76          40
2023-09-02    (no row)     (no row)    (no row)
2023-09-03    (no row)     (no row)    (no row)
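If a Mapping Data Flow turns out to be awkward for this shape, one alternative is to land the raw payload and flatten it in Azure SQL itself with OPENJSON. A minimal sketch with a hypothetical @payload variable; OUTER APPLY keeps the empty dates as NULL rows, while CROSS APPLY would drop them:

-- Sketch only: flattens {"<date>": [ {breakdown, contacts, customers}, ... ]} into rows.
DECLARE @payload nvarchar(max) = N'{
  "2023-08-01": [ { "breakdown": "email",  "contacts": 2, "customers": 2 } ],
  "2023-08-02": [],
  "2023-08-03": [ { "breakdown": "direct", "contacts": 5, "customers": 1 },
                  { "breakdown": "referral", "contacts": 3, "customers": 0 } ]
}';

SELECT
    CAST(d.[key] AS date)                              AS ReportDate,
    JSON_VALUE(b.[value], '$.breakdown')               AS Breakdown,
    CAST(JSON_VALUE(b.[value], '$.contacts')  AS int)  AS Contacts,
    CAST(JSON_VALUE(b.[value], '$.customers') AS int)  AS Customers
FROM OPENJSON(@payload) AS d              -- one row per date key
OUTER APPLY OPENJSON(d.[value]) AS b;     -- one row per breakdown; empty arrays yield NULLs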
Another Oracle 2.0 issue
It seemed like the Oracle 2.0 linked service was finally working in production. However, some pipelines have started to fail in both the production and development environments with the following error message:

ErrorCode=ParquetJavaInvocationException,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message: java.lang.ArrayIndexOutOfBoundsException:255 total entry:1 com.microsoft.datatransfer.bridge.parquet.ParquetWriterBuilderBridge.addDecimalColumn(ParquetWriterBuilderBridge.java:107) .,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,'

When I revert the linked service version back to 1.0, the copy activity runs successfully. Has anyone encountered this issue before or found a workaround?
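Not a confirmed fix, but a workaround sometimes suggested for addDecimalColumn failures like this is to give any Oracle NUMBER column that has no declared precision or scale an explicit one in the source query, so the Parquet writer receives a decimal it can map. A hedged Oracle SQL sketch with hypothetical schema, table and column names:

-- Hedged sketch (hypothetical names): cast unbounded NUMBER columns to an explicit
-- precision/scale before they reach the Parquet sink.
SELECT
    ID,
    CAST(AMOUNT AS NUMBER(38, 10)) AS AMOUNT
FROM "MYSCHEMA"."MYTABLE"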
Auto update of table in target (Snowflake) when source schema changes (SQL)
Hi, this is my use case: I have SQL Server as the source and Snowflake as the target. I have a data flow in place to load historic and CDC records from SQL Server to Snowflake, and I am using the inline CDC option available in data flows, which uses SQL Server's CDC functionality. Now the problem is that some tables in my source have schema changes fairly often, say once a month, and I want the target tables to be altered based on those schema changes. Note:
1. I could only find a data flow approach for loading, since we don't have watermark columns in the SQL tables.
2. Recreating the table in the target on each load is not a good option, since we have billions of records altogether.
Please help me with a solution for this. Thanks
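For the target side, the usual building block is an explicit column addition on the Snowflake table whenever a new source column is detected (for example by comparing INFORMATION_SCHEMA.COLUMNS on both sides). A hedged sketch with hypothetical names, not a full drift solution:

-- Sketch only (hypothetical database/schema/table/column names): evolve the target
-- table instead of recreating it when a new column shows up in the source.
ALTER TABLE ANALYTICS_DB.BRONZE.CUSTOMER
    ADD COLUMN LOYALTY_TIER VARCHAR;   -- newly appeared source column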
Error in copy activity with Oracle 2.0
I am trying to migrate our copy activities to Oracle connector version 2.0. The destination is Parquet in an Azure Storage account, which works with the Oracle 1.0 connector. Just switching to 2.0 on the linked service and adjusting the connection string (server) is straightforward, and a "test connection" is successful. But in a pipeline with a copy activity using the linked service I get the following error message on some tables:

ErrorCode=ParquetJavaInvocationException,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message: java.lang.ArrayIndexOutOfBoundsException:255 total entry:1 com.microsoft.datatransfer.bridge.parquet.ParquetWriterBuilderBridge.addDecimalColumn(ParquetWriterBuilderBridge.java:107) .,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,'

As the error suggests, it is unable to convert a decimal value from Oracle to Parquet. To me it looks like a bug in the new connector. Has anybody seen this before and found a solution? The 1.0 connector is apparently being deprecated in the coming weeks. Here is the code for the copy activity:

{
  "name": "Copy",
  "type": "Copy",
  "dependsOn": [],
  "policy": {
    "timeout": "1.00:00:00",
    "retry": 2,
    "retryIntervalInSeconds": 60,
    "secureOutput": false,
    "secureInput": false
  },
  "userProperties": [
    { "name": "Source", "value": "@{pipeline().parameters.schema}.@{pipeline().parameters.table}" },
    { "name": "Destination", "value": "raw/@{concat(pipeline().parameters.source, '/', pipeline().parameters.schema, '/', pipeline().parameters.table, '/', formatDateTime(pipeline().TriggerTime, 'yyyy/MM/dd'))}/" }
  ],
  "typeProperties": {
    "source": {
      "type": "OracleSource",
      "oracleReaderQuery": {
        "value": "SELECT @{coalesce(pipeline().parameters.columns, '*')}\nFROM \"@{pipeline().parameters.schema}\".\"@{pipeline().parameters.table}\"\n@{if(variables('incremental'), variables('where_clause'), '')}\n@{if(equals(pipeline().globalParameters.ENV, 'dev'),\n'FETCH FIRST 1000 ROWS ONLY'\n,''\n)}",
        "type": "Expression"
      },
      "partitionOption": "None",
      "convertDecimalToInteger": true,
      "queryTimeout": "02:00:00"
    },
    "sink": {
      "type": "ParquetSink",
      "storeSettings": { "type": "AzureBlobFSWriteSettings" },
      "formatSettings": {
        "type": "ParquetWriteSettings",
        "maxRowsPerFile": 1000000,
        "fileNamePrefix": { "value": "@variables('file_name_prefix')", "type": "Expression" }
      }
    },
    "enableStaging": false,
    "translator": {
      "type": "TabularTranslator",
      "typeConversion": true,
      "typeConversionSettings": { "allowDataTruncation": true, "treatBooleanAsNumber": false }
    }
  },
  "inputs": [
    {
      "referenceName": "Oracle",
      "type": "DatasetReference",
      "parameters": {
        "host": { "value": "@pipeline().parameters.host", "type": "Expression" },
        "port": { "value": "@pipeline().parameters.port", "type": "Expression" },
        "service_name": { "value": "@pipeline().parameters.service_name", "type": "Expression" },
        "username": { "value": "@pipeline().parameters.username", "type": "Expression" },
        "password_secret_name": { "value": "@pipeline().parameters.password_secret_name", "type": "Expression" },
        "schema": { "value": "@pipeline().parameters.schema", "type": "Expression" },
        "table": { "value": "@pipeline().parameters.table", "type": "Expression" }
      }
    }
  ],
  "outputs": [
    {
      "referenceName": "Lake_PARQUET_folder",
      "type": "DatasetReference",
      "parameters": {
        "source": { "value": "@pipeline().parameters.source", "type": "Expression" },
        "namespace": { "value": "@pipeline().parameters.schema", "type": "Expression" },
        "entity": { "value": "@variables('sink_table_name')", "type": "Expression" },
        "partition": { "value": "@formatDateTime(pipeline().TriggerTime, 'yyyy/MM/dd')", "type": "Expression" },
        "container": { "value": "@variables('container')", "type": "Expression" }
      }
    }
  ]
}
On-Prem SQL Server DB to Azure SQL DB
Hi, I'm trying to copy data from an on-premises SQL Server database to an Azure SQL database using a self-hosted integration runtime (SHIR) and ADF. I've stood up ADF, a SQL server, and a database in Azure. As I understand it, we need to download the SHIR installer, install SHIR on the on-premises server, and register that SHIR with the ADF key. The on-premises SQL Server has TCP/IP connections enabled. What other setup do I need to do on the on-premises server, such as firewall, IP, or port configuration? The on-premises SQL Server is in a different network which is not connected to our network.
Oracle 2.0 property authenticationType is not specified
I just published the upgrade to the Oracle 2.0 connector (linked service) and all my pipelines ran OK in dev. This morning I woke up to lots of red pipelines that had run during the night. I get the following error message:

ErrorCode=OracleConnectionOpenError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to open the Oracle database connection.,Source=Microsoft.DataTransfer.Connectors.OracleV2Core,''Type=System.ArgumentException, Message=The required property is not specified. Parameter name: authenticationType,Source=Microsoft.Azure.Data.Governance.Plugins.Core,'

Here is the code for my Oracle linked service:

{
  "name": "Oracle",
  "properties": {
    "parameters": {
      "host": { "type": "string" },
      "port": { "type": "string", "defaultValue": "1521" },
      "service_name": { "type": "string" },
      "username": { "type": "string" },
      "password_secret_name": { "type": "string" }
    },
    "annotations": [],
    "type": "Oracle",
    "version": "2.0",
    "typeProperties": {
      "server": "@{linkedService().host}:@{linkedService().port}/@{linkedService().service_name}",
      "authenticationType": "Basic",
      "username": "@{linkedService().username}",
      "password": {
        "type": "AzureKeyVaultSecret",
        "store": { "referenceName": "Keyvault", "type": "LinkedServiceReference" },
        "secretName": { "value": "@linkedService().password_secret_name", "type": "Expression" }
      },
      "supportV1DataTypes": true
    },
    "connectVia": {
      "referenceName": "leap-prod-onprem-ir-001",
      "type": "IntegrationRuntimeReference"
    }
  }
}

As you can see, "authenticationType" is defined, but my guess is that the publish and deployment step somehow drops that property. We are using "modern" deployment in Azure DevOps pipelines using Node.js. Would appreciate some help with this!
KQL Query output limit of 5 lakh rows
Hi, I have a Kusto table which has more than 5 lakh (500,000) rows and I want to pull it into Power BI. When I run the KQL query it gives an error due to the 500,000-row limit, but when I use set notruncation before the query I no longer get this row-limit error in Power BI Desktop. However, I do get the error in the Power BI service after applying incremental refresh to that table. My question is whether set notruncation will always work so that I will not face this error for millions of rows, and whether this is the only limit or there are other limits in ADX that could cause errors with a huge volume of data. Or should I export the data from the Kusto table to Azure Blob Storage and pull the data from Blob Storage into Power BI? Which would be the best way to do it?
Deduplication on SAP CDC connector
I have a pipeline in Azure Data Factory (ADF) that uses the SAP CDC connector to extract data from an SAP S/4HANA standard extractor. The pipeline writes data to an Azure staging layer (ADLS), and from there it moves the data to the bronze layer. All rows are copied from SAP to the staging layer without any data loss. However, during the transition from staging to bronze, we observe that some rows are being dropped by the deduplication process based on the configured primary key. I have the following questions:
1. How does ADF prioritize which row to keep and which to drop during the deduplication process?
2. I noticed a couple of ADF-generated columns in the staging data, such as _SEQUENCENUMBER. What is the purpose of these columns, and what logic does ADF use to create or assign values to them?
Any insights would be appreciated.
Partitioning in Azure Synapse
Hello, I'm currently working on an optimization project, which has led me down a rabbit hole of technical differences between regular MSSQL and the dedicated SQL pool that is Azure PDW. I noticed that, when checking the distribution of partitions after creating a table that splits data by YEAR([datefield]) with boundary ranges for each year ('20230101', '20240101', etc.), the sys.partitions view claims that all partitions have an equal number of rows. Also, from the query plans I cannot see any impact on the way the query is executed, even though partition elimination should be the first move when querying with WHERE [datefield] = '20230505'. Any info and advice would be greatly appreciated.
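For reference, a minimal sketch of the kind of yearly-partitioned dedicated SQL pool table described above, with hypothetical table and column names, shown only to make the setup and the kind of predicate that can be eliminated concrete:

-- Sketch only (hypothetical names): yearly RANGE RIGHT partitions on [datefield].
CREATE TABLE dbo.FactSales
(
    SaleId      bigint         NOT NULL,
    [datefield] date           NOT NULL,
    Amount      decimal(18, 2) NULL
)
WITH
(
    DISTRIBUTION = HASH(SaleId),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ([datefield] RANGE RIGHT FOR VALUES ('20230101', '20240101', '20250101'))
);

-- Elimination relies on a literal predicate against the partitioning column itself.
SELECT COUNT(*)
FROM dbo.FactSales
WHERE [datefield] = '20230505';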
Need Urgent Help - Data movement from Azure Cloud to on-premises databases
I have a requirement to transfer JSON payload data from an Azure Service Bus queue/topic to an on-premises Oracle DB. Could I use an ADF pipeline for this, or is there a simpler process available? If so, what steps and prerequisites are necessary? Please also mention any associated pros and cons. Additionally, I need to move data into on-premises IBM DB2 and MySQL databases using a similar approach. Are there alternatives if no direct connector is available? Kindly include any pros and cons related to these options. Please respond urgently, as an immediate reply would be greatly appreciated.
Events
Recent Blogs
- If you have been relying on Oracle Database as your primary system for analytics and the generation of MIS reports, you are probably familiar with the use of temporary tables within stored procedures... (Jun 20, 2025)
- The MSSQL Extension for VS Code continues to evolve, bringing powerful new features that make SQL development more local, more organized, and more intelligent. In version v1.33.0, we're introducing L... (Jun 18, 2025)