Recent Discussions
Importing an ARM template into ADF DevOps
Hi everyone, I'm new to ADF and Azure DevOps and need some help. I have two ADF workspaces, A and B; A is Git-enabled and B is in Live mode. All my resources (pipelines, data flows, datasets, linked services, credentials, and triggers) are in Workspace B, and I need to import them into the collaboration branch of Workspace A. I tried manually copying the ARM template into the collaboration branch of the Git repo, but nothing shows up in Workspace A. Am I missing anything here? Any help is much appreciated! Thanks
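A Git-enabled factory reads individual artifact JSON files from folders such as pipeline/, dataset/, linkedService/, dataflow/ and trigger/ in the collaboration branch, not a single ARM template, which is likely why nothing appears. Below is a minimal, untested sketch of how one might split an exported ARMTemplateForFactory.json into per-artifact files in that layout; the file and folder names are assumptions based on the standard ADF Git repository structure.

```python
import json
import os

# Assumed mapping from ARM resource types to the folder names that a
# Git-enabled ADF workspace expects in its collaboration branch.
FOLDER_BY_TYPE = {
    "Microsoft.DataFactory/factories/pipelines": "pipeline",
    "Microsoft.DataFactory/factories/datasets": "dataset",
    "Microsoft.DataFactory/factories/linkedServices": "linkedService",
    "Microsoft.DataFactory/factories/dataflows": "dataflow",
    "Microsoft.DataFactory/factories/triggers": "trigger",
}

def split_arm_template(arm_path: str, repo_root: str) -> None:
    """Write each factory resource in the ARM export as its own JSON file."""
    with open(arm_path, encoding="utf-8") as f:
        template = json.load(f)

    for resource in template.get("resources", []):
        folder = FOLDER_BY_TYPE.get(resource.get("type"))
        if folder is None:
            continue  # skip factory-level or unsupported resource types
        # ARM names look like "[concat(parameters('factoryName'), '/myPipeline')]";
        # keep only the part after the last '/' as the artifact name.
        name = resource["name"].rstrip("')]\"").split("/")[-1]
        artifact = {"name": name, "properties": resource["properties"]}
        out_dir = os.path.join(repo_root, folder)
        os.makedirs(out_dir, exist_ok=True)
        with open(os.path.join(out_dir, f"{name}.json"), "w", encoding="utf-8") as out:
            json.dump(artifact, out, indent=4)

# Example (paths are placeholders):
# split_arm_template("ARMTemplateForFactory.json", "adf-repo/")
```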
ServiceNow Connection - data request URL too long for pagination
Hi, we've encountered an issue after setting up a connection between Data Factory and ServiceNow. Our team has been trying to query a really big table (alm_asset) from our ServiceNow instance, and when we set Pagination to anything but empty, Data Factory lists every column of the table in the query. We couldn't find where that column list is configured, and the resulting URL is so long that the REST request fails and pagination cannot fit: The API request to ServiceNow failed. Request Url : -- removed -- , Status Code: BadRequest, Error message: {"error":{"message":"Pagination not supported","detail":"The requested query is too long to build the response pagination header URLs. Please do one of the following: shorten the sysparm_query, or query without pagination by setting the parameter 'sysparm_suppress_pagination_header' to true, or set 'sysparm_limit' with a value larger then 4182 to bypass the need for pagination."},"status":"failure"} This 4182 is just on a sub-production instance; on the production instance we have significantly more data. Can somebody help with how to edit the parameters sent to the REST API through that connector?
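For comparison, it can help to call the ServiceNow Table API directly and restrict the columns, since a short field list keeps the URL well under the limit the error mentions. A rough sketch follows; the instance URL, credentials, and chosen field list are placeholders, and this bypasses the ADF connector entirely:

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
TABLE = "alm_asset"
FIELDS = "sys_id,asset_tag,display_name,install_status"  # only the columns you need

def fetch_page(offset: int, limit: int = 1000) -> list[dict]:
    """Fetch one page of records, keeping the URL short via sysparm_fields."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{TABLE}",
        params={
            "sysparm_fields": FIELDS,       # avoids an exhaustive column list in the URL
            "sysparm_limit": limit,         # page size
            "sysparm_offset": offset,       # paging cursor
        },
        auth=("api_user", "api_password"),  # placeholder credentials
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("result", [])

# Example: walk all pages of the table.
offset, rows = 0, []
while True:
    page = fetch_page(offset)
    if not page:
        break
    rows.extend(page)
    offset += 1000
```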
Extend the period of storing telemetry of all services of my subscription.
I have now had 4 support cases in which I saw strange behavior from my services in the Azure cloud, and for these I created support cases. We are a software company building solutions on the Microsoft stack. In case of an incident we first look at our own solution and do an RCA. Sometimes the root cause is not in our solution but in the Azure services, and then we create a Microsoft support case. It takes time for Microsoft to understand the case, and the case is often moved between different engineers. Because of this we run out of the 30 days for which telemetry of our services is available to the Microsoft engineers, so we lose all the data needed to find the root cause. Please extend the 30 days; it is really too short for analyzing these situations. In our company we store telemetry for a year, and it is not acceptable to our customers that we cannot tell them the root cause of an incident.
OData Connector for Dynamics Business Central
Hey guys, I'm trying to connect to the Dynamics Business Central OData API in ADF, but I'm not sure what I'm doing wrong: the same endpoint returns data in Postman but returns an error in the ADF linked service. https://api.businesscentral.dynamics.com/v2.0/{tenant-id}/Sandbox-UAT/ODataV4/Company('company-name')/Chart_of_Accounts
Optimizing Azure Database for MySQL and Ensuring Compatibility with SIM-Based Authentication
I'm setting up Azure Database for MySQL and need some guidance on optimizing performance for high-traffic applications. Are there best practices for indexing and query optimization to avoid slowdowns? Also, does anyone know if using a cloud-based MySQL instance affects SIM-based authentication methods? I'm specifically working with Dito SIM and want to ensure compatibility. Any insights would be greatly appreciated!
how to export all tables from database
I recently started working in Microsoft Synapse and have been exploring the database templates available in the Synapse gallery (e.g. Automotive), and I want to export all of their tables. I've tried using the DESCRIBE command, but it only gives information about a single table. How can I write a SQL query to list all tables in the database template and export them to CSV? Is there a specific system view or query I should use in Synapse to achieve this? Any help would be appreciated. Thanks in advance!
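One option is to enumerate the tables through INFORMATION_SCHEMA.TABLES and dump each one to CSV from a client. A rough sketch with pyodbc and pandas; the server name, database, and authentication details are placeholders:

```python
import pandas as pd
import pyodbc

# Placeholder connection string for a Synapse SQL endpoint.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"
    "DATABASE=AutomotiveDb;"
    "UID=sqladminuser;PWD=****;Encrypt=yes;"
)

# List every user table via the standard system view.
tables = pd.read_sql(
    "SELECT TABLE_SCHEMA, TABLE_NAME "
    "FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'",
    conn,
)

# Export each table to its own CSV file.
for _, row in tables.iterrows():
    schema, name = row["TABLE_SCHEMA"], row["TABLE_NAME"]
    df = pd.read_sql(f"SELECT * FROM [{schema}].[{name}]", conn)
    df.to_csv(f"{schema}.{name}.csv", index=False)
    print(f"exported {schema}.{name}: {len(df)} rows")

conn.close()
```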
Accessing serverless sql pool tables from dedicated sql pool
I'm trying to access the tables available in the Synapse serverless SQL pool from the dedicated SQL pool. I'd like to create some simple stored procedures that import data from delta/parquet tables mapped as external tables in the serverless SQL pool and load them into dedicated SQL pool tables. Is there a simple way to do this without having to define external tables in the dedicated SQL pool too? I tried that and there seem to be many limitations (Delta not supported, etc.).
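Since a dedicated SQL pool cannot query the serverless pool's external tables directly, one workaround is to load the underlying Parquet files straight into the dedicated pool with COPY INTO (Delta folders would need a different route, such as staging them to Parquet first). A hedged sketch that drives the load from Python; the storage path, table name, and use of a managed identity are assumptions:

```python
import pyodbc

# Placeholder connection to the dedicated SQL pool.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"
    "DATABASE=DedicatedPool01;"
    "UID=sqladminuser;PWD=****;Encrypt=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Load Parquet files from the lake directly into a dedicated pool table.
cur.execute("""
    COPY INTO dbo.SalesStaging
    FROM 'https://mydatalake.dfs.core.windows.net/curated/sales/*.parquet'
    WITH (
        FILE_TYPE = 'PARQUET',
        CREDENTIAL = (IDENTITY = 'Managed Identity')
    )
""")
print("rows loaded:", cur.rowcount)

cur.close()
conn.close()
```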
Manage Pipelines (Start/Stop/Monitoring)
I cannot find a way to manage many pipelines for ETL. For example, when multiple pipelines are running, how can I disable execution of specific pipelines? Is there a tool from Microsoft, or any third-party tool, that can help manage the execution and monitoring of pipelines in ADF? Also, are there any best practices or patterns for managing multiple pipelines?
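Triggers are the usual on/off switch: stopping a trigger disables the pipelines it starts, and pipeline runs can be monitored through the management SDK. A rough sketch with azure-mgmt-datafactory and azure-identity; the subscription, resource group, factory, and trigger names are placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUB_ID, RG, FACTORY = "<subscription-id>", "rg-data", "adf-prod"  # placeholders

client = DataFactoryManagementClient(DefaultAzureCredential(), SUB_ID)

# "Disable" a pipeline by stopping the trigger that schedules it.
client.triggers.begin_stop(RG, FACTORY, "DailyLoadTrigger").result()

# Monitor: list pipeline runs from the last 24 hours.
now = datetime.now(timezone.utc)
runs = client.pipeline_runs.query_by_factory(
    RG,
    FACTORY,
    RunFilterParameters(last_updated_after=now - timedelta(days=1),
                        last_updated_before=now),
)
for run in runs.value:
    print(run.pipeline_name, run.status, run.run_start)
```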
GitFlow possible with Synapse?
My team has been using Synapse for some time across our dev, UAT, and production environments. So far they have not used any CD to deploy between environments; instead they promote artifacts manually. The reason is that their UAT environment is rarely ready to go to production: features often sit in UAT for many months before being deployed to production, but they are required to be in UAT for testing against the full dataset. This seems to call for a branching strategy like GitFlow, using cherry-picking or git revert for selective PRs so that only finished features reach production. Has anyone faced this issue, or do you have any tips on how to resolve it? Unlike traditional app development, feature flags don't solve the problem, as they really only work inside pipelines. Thanks!
Issue with mysql_request Plugin on Dashboard
Today, while using the Azure Data Explorer dashboard, we noticed that the mysql_request plugin is no longer functioning as expected. Specifically, we are encountering the following error message: evaluate mysql_request(): the following error(s) occurred while evaluating the output schema: The 'mysql_request' plugin cannot be used as the request property request_readonly_hardline is set. Interestingly, the plugin works perfectly fine within the Query tab; however, when called from the dashboard, it fails to execute. This issue was not present yesterday, when everything was working seamlessly. Looking further, I see that in the Query tab the ADX UI sends the request with "Options": { "request_readonly_hardline": false }, whereas requests from dashboard tiles such as tables and charts are sent with request_readonly_hardline: true. Is this on purpose? Thanks
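To confirm whether that property alone explains the difference, the same query can be run from the Python Kusto SDK while toggling the client request option. This is only a diagnostic sketch; whether the service honors a client-side override of request_readonly_hardline (the property name comes from the error above) is an assumption, and the cluster, database, and connection string are placeholders:

```python
from azure.kusto.data import (
    ClientRequestProperties,
    KustoClient,
    KustoConnectionStringBuilder,
)

CLUSTER = "https://mycluster.westeurope.kusto.windows.net"  # placeholder
DATABASE = "MyDatabase"                                     # placeholder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)

# Hypothetical query using the mysql_request plugin, as on a dashboard tile.
query = """
evaluate mysql_request(
    'Server=mymysql.mysql.database.azure.com;Port=3306;Database=appdb',
    'select id, name from customers limit 10')
"""

for readonly_hardline in (False, True):
    props = ClientRequestProperties()
    # Mirror what the Query tab vs. the dashboard appears to send.
    props.set_option("request_readonly_hardline", readonly_hardline)
    try:
        result = client.execute(DATABASE, query, props)
        print(readonly_hardline, "->", len(result.primary_results[0].rows), "rows")
    except Exception as exc:
        print(readonly_hardline, "-> failed:", exc)
```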
Azure Synapse: What’s Next?
With the recent introduction of Microsoft Fabric, which aims to unify various data and analytics workloads into a single platform, how will this impact the future of Azure Synapse Analytics? Specifically, will Azure Synapse Analytics become obsolete, or will it continue to play a significant role alongside Microsoft Fabric? Additionally, what are the recommended migration paths and considerations for organizations heavily invested in Azure Synapse Analytics?
Azure Synapse Issue
Hi, I have a question regarding the backend mechanism of Synapse Spark clusters when running pipelines. I have a notebook that installs packages using !pip, since %pip is disabled during pipeline execution. I understand that !pip is a shell command that installs packages at the driver level. I'm wondering if this will impact other pipelines that are running concurrently but do not require the same packages. Thank you for your help.
How do I unpivot with schema drift enabled?
I have a source without a pre-defined schema, and I derive each column name using a column pattern expression. Data preview shows what I expect (the contents of a file in a blob container). I then have a Select step that selects each column and renames 'Marker name' to 'Marker_name22'; data preview again shows what I expect (the same columns with 'Marker name' renamed). Now, in the unpivot step, I would like to ungroup by the 'Marker_name22' column and unpivot all other columns, but the 'Marker_name22' column is not available. I am unsure how to proceed from here. Thanks in advance for the help.
Different pools for workers and driver - in ADF triggered ADB jobs
Hello all, Azure Databricks allows separate compute pools for drivers and workers when you create a job via the native Databricks workflows. For customers using ADF as an orchestrator for ADB jobs, is there a way to achieve the same when invoking notebooks/jobs via ADF? The linked service configuration in ADF seems to allow only one instance pool. Appreciate any pointers. Thanks!
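If the ADF linked service only exposes a single pool, one workaround is to define the job on the Databricks side (where the cluster spec accepts both instance_pool_id and driver_instance_pool_id) and have ADF simply run that job by ID. A hedged sketch that creates such a job through the Jobs 2.1 REST API; the workspace URL, token, pool IDs, and notebook path are placeholders:

```python
import requests

WORKSPACE = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<databricks-pat-or-aad-token>"                            # placeholder

job_spec = {
    "name": "adf-invoked-etl",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Shared/etl_notebook"},
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "num_workers": 4,
                # Separate pools for workers and the driver.
                "instance_pool_id": "<worker-pool-id>",
                "driver_instance_pool_id": "<driver-pool-id>",
            },
        }
    ],
}

resp = requests.post(
    f"{WORKSPACE}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
job_id = resp.json()["job_id"]
print("created job", job_id)

# ADF can then trigger this job (e.g. via a Web activity calling
# /api/2.1/jobs/run-now with {"job_id": job_id}), so both pools are honored.
```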
How do I create a flow that adapts to new columns dynamically?
Hello, I have files landing in a blob storage container that I'd like to copy to a SQL database table. The column headers of these files are date markers, so each time a new file is uploaded, a new date appears as a new column. How can I handle this in a pipeline? I think I'll need to dynamically accept the schema and then use an unpivot transformation to normalize the data structure for SQL, but I am unsure how to execute this plan. Thanks!
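To make the target shape concrete, here is what the unpivot (melt) would do to one of those files in plain pandas, turning the date-named columns into rows so the SQL table schema stays fixed. The file layout and column names are invented for illustration; inside a pipeline, an ADF data flow with schema drift enabled plus an unpivot transformation would be the equivalent:

```python
import pandas as pd

# Hypothetical incoming file: one row per marker, one column per date.
wide = pd.DataFrame({
    "Marker name": ["M1", "M2"],
    "2025-01-01": [10, 20],
    "2025-01-02": [11, 21],
    # the next upload simply adds e.g. a "2025-01-03" column
})

# Unpivot: every column except the key becomes a (date, value) row,
# so new date columns never change the destination table's schema.
long = wide.melt(
    id_vars=["Marker name"],
    var_name="measure_date",
    value_name="value",
)
print(long)
#   Marker name measure_date  value
# 0          M1   2025-01-01     10
# ...
```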
Global secure access client doesn't connect to the Azure SQL Server engine
Unlike other resources such as Key Vault, Cosmos DB, and virtual machines, it is not possible to connect to SQL databases. Has anyone managed to make the connection work without having to add the IP to the server firewall?
Documentation Generator for Azure Data Factory
Hi, I could not find a documentation generator in ADF (unless I am missing something), so I have written a utility script in Python for generating documentation (a sort of quick reference) from the ARM export. I am still in the process of refining it and adding more artifacts, but you can modify and extend it as required. I hope someone will find it useful for getting a quick overview of all the objects in an ADF instance. https://github.com/sanjayganvkar/AzureDataFactory---Documentation-Generator Regards, Sanjay
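For readers who just want the flavor of the idea without pulling the repo, the core of such a generator can be very small: read the ARM export and emit a Markdown quick reference grouped by artifact type. This sketch is not the linked utility, just a minimal illustration of the approach, and the input file name is assumed to be the default one ADF produces on export:

```python
import json
from collections import defaultdict

def arm_to_markdown(arm_path: str) -> str:
    """Summarize an ADF ARM export as a Markdown quick reference."""
    with open(arm_path, encoding="utf-8") as f:
        template = json.load(f)

    by_type = defaultdict(list)
    for res in template.get("resources", []):
        # e.g. "Microsoft.DataFactory/factories/pipelines" -> "pipelines"
        kind = res.get("type", "").split("/")[-1]
        name = res.get("name", "").rstrip("')]\"").split("/")[-1]
        by_type[kind].append(name)

    lines = ["# ADF quick reference", ""]
    for kind in sorted(by_type):
        lines.append(f"## {kind} ({len(by_type[kind])})")
        lines += [f"- {name}" for name in sorted(by_type[kind])] + [""]
    return "\n".join(lines)

# Example (path is the default ADF export file name):
# print(arm_to_markdown("ARMTemplateForFactory.json"))
```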
Recent Blogs
- What are Elastic Clusters? Elastic clusters on Azure Database for PostgreSQL provide a solution for horizontal scaling, allowing you to outgrow the constraints of a single-node Azure Database for P... (Feb 10, 2025)
- We’re announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until Fe... (Feb 06, 2025)