Parameterization of Linked Services
I am trying to parameterize a Linked Service in ADF. I have probably got confused and hope someone can make it clear. Two questions:

1. I have two parameters, 'url' and 'secretName'. However, in the ARM template I only see the 'url' parameter, not 'secretName'. Why is 'secretName' not parameterized?
2. How do I supply a value for the 'url' parameter when I deploy the ARM template to another environment (say, a 'Test' environment)?

These are the files.

Linked Service:

```json
{
    "name": "LS_DynamicParam",
    "properties": {
        "parameters": {
            "SA_URL": {
                "type": "String",
                "defaultValue": "https://saforrisma.dfs.core.windows.net/"
            },
            "SecretName": {
                "type": "String",
                "defaultValue": "MySecretInKeyVault"
            }
        },
        "annotations": [],
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "@{linkedService().SA_URL}",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_AKV",
                    "type": "LinkedServiceReference"
                },
                "secretName": {
                    "value": "@linkedService().SecretName",
                    "type": "Expression"
                }
            }
        }
    }
}
```

ARMTemplateParametersForFactory.json:

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "factoryName": {
            "value": "ADF-Dev"
        },
        "LS_AKV_properties_typeProperties_baseUrl": {
            "value": "https://kv-forrisma.vault.azure.net/"
        },
        "LS_MAINStorage_properties_typeProperties_connectionString_secretName": {
            "value": "storageaccount-adf-dev"
        },
        "LS_DynamicParam_properties_typeProperties_url": {
            "value": "@{linkedService().SA_URL}"
        }
    }
}
```
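Not part of the original question, but a hedged pointer: the parameters ADF generates on publish are driven by the parameterization template (arm-template-parameters-definition.json in the root of the collaboration branch). If a property such as accountKey.secretName is not being picked up, one thing to try is adding it there explicitly. The snippet below is only a sketch of what such an entry might look like for an AzureBlobFS linked service; whether it catches a secretName that is itself an expression would need to be verified, and the "parameters"/"defaultValue" section is an assumption on my part, not something confirmed by the post.

```json
{
    "Microsoft.DataFactory/factories/linkedServices": {
        "AzureBlobFS": {
            "properties": {
                "parameters": {
                    "SA_URL": {
                        "defaultValue": "="
                    },
                    "SecretName": {
                        "defaultValue": "="
                    }
                },
                "typeProperties": {
                    "url": "=",
                    "accountKey": {
                        "secretName": "="
                    }
                }
            }
        }
    }
}
```

For the second question: the generated LS_DynamicParam_properties_typeProperties_url parameter currently carries the expression @{linkedService().SA_URL} rather than an actual URL, so overriding it in a Test parameters file would replace the expression itself. If the goal is a different URL per environment, one approach is to parameterize the SA_URL defaultValue (as sketched above, if supported in your setup) and supply the Test URL for that generated parameter in the parameters file or as an override at deployment time.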
Getting Project Server data using REST APIs in ADF

Hello, I have created a pipeline that copies Project Server data from REST APIs to a JSON data lake and then into my SQL Server database (SSMS). I was inspired by the following blog: https://techcommunity.microsoft.com/blog/projectsupport/reading-project-online-odata-with-azure-data-factory/3616300

I built the pipeline as in the blog. The first five activities retrieve the login information needed to access Project Server, and all of that information is brought together with a concat in the next step. I also have an access token (bearer token). Next, I have three copy steps, one for each Project Server table I want to copy (Projects, Resources and Assignments). Each table has its own REST API; the Assignments and Resources endpoints are the same as the Projects one, only ending in 'Assignments' and 'Resources'. The sink of the Copy activity is the JSON data lake, and for the mapping I only included the ProjectId column for testing purposes.

My main issue is the number of rows the APIs copy. The Projects API copies exactly 300 rows, while it should be 626. The Resources API copies 838 rows, which is the right amount. The Assignments API copies exactly 1000 rows, but Assignments is a large fact table of more than 40,000 rows.

Another issue is the rest of the output I receive from the Copy activity: it does not give any information about 'offset' or 'pages', so I cannot use pagination either. I tried to work around this by adding "?top=1000" to the API URL and by using relative URLs with an Until loop, but nothing really seems to work.

Does someone have experience with this niche issue? I haven't found much documentation (the blog also does not mention anything about row counts) and I am new to ADF, so any help is more than welcome. Thank you in advance!
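Not something the post shows, but for OData-style endpoints that cap the page size, one pattern is to let the REST connector's pagination rules walk through the pages instead of issuing a single request, keeping $top on the dataset's relative URL. The fragment below is a hedged sketch only: it assumes the endpoint honors $top/$skip, that range-based pagination rules are available in your connector version, and the step of 300 is taken from the observed symptom, not a verified server limit.

```json
{
    "source": {
        "type": "RestSource",
        "requestMethod": "GET",
        "httpRequestTimeout": "00:02:00",
        "paginationRules": {
            "QueryParameters.$skip": "RANGE:0::300",
            "EndCondition:$.value": "Empty"
        }
    }
}
```

If pagination rules are not available to you, the fallback is an Until loop that keeps a $skip variable, appends it to the relative URL, and stops once a Copy run returns fewer rows than the page size.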
How to Enable Zone Redundancy in Azure Data Factory

Issue Statement

As adoption of Azure Data Factory (ADF) grows across industries, data redundancy has become a major concern. Some Cloud Solution Architects (CSAs) are not entirely clear on this topic conceptually and architecturally, which leads to prolonged discussions. This blog aims to clear up some of that confusion and accelerate the adoption of ADF. I might write a separate blog to cover other solution areas such as storage, databases, API Management, and so on.

ADF Data Redundancy

ADF does not support zone redundancy; it only has built-in regional redundancy, which is completely managed by Microsoft. We cannot enable or disable this feature. Microsoft's official documentation states: "Azure Data Factory data is stored and replicated in the paired region to protect against metadata loss. During regional datacenter failures, Azure may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory."

This should clear up much of the confusion CSAs are facing, and it is why there is no option for configuring zone redundancy when creating a new data factory resource. We also cannot enable or disable zone redundancy for an existing data factory.

How to Implement Zone Redundancy Using Source Control in Azure Data Factory

To ensure you can track and audit changes made to your metadata, consider setting up source control for your Azure Data Factory. This also gives you access to the metadata JSON files for pipelines, datasets, linked services, and triggers. Azure Data Factory lets you work with different Git repositories (Azure DevOps and GitHub). Through source control you can manually implement zone-level or region-level redundancy whenever an outage occurs. The details are covered in this article.

Monitoring New Releases

I will closely monitor new releases of ADF to see whether an option to configure zone-level or broader data redundancy becomes available.
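The article referenced above is not reproduced here, but the general pattern it describes is to keep the factory's generated ARM template in Git and, during an outage, deploy it to a pre-created factory in another zone or region while overriding the environment-specific parameters. Below is a minimal sketch of such a parameters file; the factory name, Key Vault URL, and parameter names are hypothetical placeholders, not values from the blog.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "factoryName": {
            "value": "ADF-DR-SecondaryRegion"
        },
        "LS_AKV_properties_typeProperties_baseUrl": {
            "value": "https://kv-dr-secondary.vault.azure.net/"
        }
    }
}
```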
Parsing error while calling service now table api

Hi, I am trying to copy data from ServiceNow to Blob Storage using the ServiceNow Table API (https://www.servicenow.com/docs/bundle/xanadu-api-reference/page/integrate/inbound-rest/concept/c_TableAPI.html). When I call it from Postman, I get JSON output with data, but when I configure the same in ADF using the ServiceNow linked service, as described in https://learn.microsoft.com/en-us/azure/data-factory/connector-servicenow?tabs=data-factory, I get this error:

"Failed to parse the API response. Failed at: TableAPIClient FetchTableDataAsync Error message: Unexpected character encountered while parsing value: }. Path 'result[3716]', line 1, position 17316023"

How can I resolve this error?
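The error alone does not say what the connector chokes on, but a workaround some people try is to bypass the native ServiceNow connector and call the Table API through ADF's generic REST connector, where you control the page size and query parameters yourself. The sketch below is assumption-heavy: the table name, page size, and linked service name are placeholders, and sysparm_limit/sysparm_exclude_reference_link come from the ServiceNow Table API documentation rather than from this post.

```json
{
    "name": "DS_ServiceNow_Incident_REST",
    "properties": {
        "type": "RestResource",
        "linkedServiceName": {
            "referenceName": "LS_ServiceNow_REST",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "relativeUrl": "/api/now/table/incident?sysparm_limit=1000&sysparm_exclude_reference_link=true"
        }
    }
}
```

The REST linked service would point at your instance URL (https://<instance>.service-now.com) with basic or OAuth2 authentication; paging beyond the first 1000 rows would still need pagination rules or an Until loop on sysparm_offset.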
Process your data in seconds with new ADF real-time CDC

In January, we announced that we've elevated our Change Data Capture features front-and-center in ADF. Up until just today, the lowest latency we were allowing for CDC processing was 15 minutes. But today, I am super-excited to announce that we have enabled the real-time option!
Memory errors during data extraction from SAP using Azure Data Factory SAP Table connector

Azure Data Factory (ADF) is a fully managed data integration service for cloud-scale analytics in Azure. ADF provides more than 90 out-of-the-box connectors to integrate with your source and target systems. When we think about enterprise systems, SAP plays a major role.
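The excerpt above stops before the actual guidance, but the usual lever for memory errors with the SAP Table connector is partitioning the extraction so that each RFC call returns a smaller slice of the table. Below is a hedged sketch of a Copy activity source with partitioning enabled; the field list, filter, partition column, and bounds are invented for illustration, and the exact property set depends on your connector version.

```json
{
    "source": {
        "type": "SapTableSource",
        "rfcTableFields": "MATNR, ERSDA, MTART",
        "rfcTableOptions": "MTART EQ 'FERT'",
        "partitionOption": "PartitionOnInt",
        "partitionSettings": {
            "partitionColumnName": "MATNR",
            "partitionLowerBound": "000000000000000001",
            "partitionUpperBound": "000000000000999999",
            "maxPartitionsNumber": "10"
        }
    }
}
```

Smaller partitions mean more, lighter RFC calls against SAP, which is typically the trade-off made to stay within the work process memory limits on the SAP side.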
Using Azure Data Factory orchestrating Kusto query-ingest

In this blog post, we'll explore how Azure Data Factory (ADF) can be used for orchestrating large query ingestions. With this approach you will learn how to split one large query ingestion into multiple partitions, orchestrated with ADF.
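The full post is not included here, but the core building block for this kind of orchestration is ADF's Azure Data Explorer Command activity running a query-ingest command per partition, typically inside a ForEach. The sketch below is my own illustration under that assumption: the linked service name, table names, and partition predicate are placeholders, not taken from the blog.

```json
{
    "name": "IngestPartition",
    "type": "AzureDataExplorerCommand",
    "linkedServiceName": {
        "referenceName": "LS_AzureDataExplorer",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "command": ".set-or-append TargetTable <| SourceTable | where IngestionDate == datetime(@{item().partitionDate})",
        "commandTimeout": "01:00:00"
    }
}
```

In practice the ForEach items would be a list of partition keys (dates, tenant IDs, and so on) produced by a Lookup activity or a pipeline parameter, and the ForEach can run the command activity in parallel within the cluster's capacity limits.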