Data Factory
25 Topics
HTTP request was forbidden with client authentication when connecting to Dataverse

Hi, I'm trying to set up a linked service that connects to Dataverse. This Dataverse environment was created through the Microsoft Developer Program. Because I need multi-factor authentication to log in, I tried using the Service Principal authentication type. When I test the connection, I get the following error: "The HTTP request was forbidden with client authentication scheme 'Anonymous'. => The remote server returned an error: (403) Forbidden. Unable to Login to Dynamics CRM." Below is what I used for the settings (data has been masked). In the Azure AD app registration, I have assigned the following API permissions. I have created a certificate & secret for this application and used the secret value as the "service principal key" (I used the Application (client) ID for the "service principal ID"). I tried the secret ID as well, but I always get the same error. I was wondering if there is another setting I need to look at to fix the problem? Could it be because the Dataverse environment is a developer version? Jason
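For reference, a Dataverse (Common Data Service for Apps) linked service using service principal key authentication is typically defined along the lines of the sketch below. This is a minimal, hedged example rather than the exact configuration from the post: the environment URL, IDs, and secret are placeholders, and the property names follow the current connector documentation. Note that granting API permissions on the app registration alone is usually not enough; the app registration normally also has to be added as an application user with a security role in that specific Dataverse environment (via the Power Platform admin center), and a missing application user is a common cause of this 403 "Anonymous" error even when the client ID and secret value are correct.

{
  "name": "DataverseLinkedService",
  "properties": {
    "description": "Hypothetical example - replace placeholders with your own environment values",
    "type": "CommonDataServiceForApps",
    "typeProperties": {
      "deploymentType": "Online",
      "serviceUri": "https://<your-org>.crm.dynamics.com",
      "authenticationType": "AADServicePrincipal",
      "servicePrincipalCredentialType": "ServicePrincipalKey",
      "servicePrincipalId": "<application (client) ID of the app registration>",
      "servicePrincipalCredential": {
        "type": "SecureString",
        "value": "<client secret VALUE, not the secret ID>"
      }
    }
  }
}

A developer-program environment should still accept application users, so the license type is less likely to be the problem than a missing or unlicensed application user in the environment itself.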
Discover the Future of Data Engineering with Microsoft Fabric for Technical Students & Entrepreneurs

Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. This makes it an ideal platform for technical students and entrepreneurial developers looking to streamline their data engineering and analytics workflows.
Data Factory REST API Parameters in URL

Hi there, in Azure Data Factory I want to get data from a REST API and push it to Azure Table storage. My challenge is that I need to call two different API endpoints to get the details of each purchase order. First I get the header-level information from the Purchase List endpoint; no problem doing that. Sample API endpoint used: https://private-anon-92de43072c-dearinventory.apiary-mock.com/ExternalApi/v2/purchaseList
Purchase List documentation: https://dearinventory.docs.apiary.io/#reference/purchase/purchase-list/get
Then I need to call the Purchase Order endpoint using the ID from the Purchase List to get the detailed purchase order information. Purchase List [ID] > Purchase Order [TaskID], where ID = TaskID:
https://private-anon-92de43072c-dearinventory.apiary-mock.com/ExternalApi/v2/purchase/order?TaskID=TaskID
So my question is: how do I grab the ID from the Purchase List and pass it to the second API endpoint, in a ForEach, within the Purchase Order API URL? Purchase Order API documentation: https://dearinventory.docs.apiary.io/#reference/purchase/purchase-order/get
Where does the TaskID get defined, and how do I use it? Against the linked service? The dataset? The pipeline? Any help is much appreciated.

Purchase List sample of what gets returned:

{
  "Total": 2,
  "Page": 1,
  "PurchaseList": [
    {
      "ID": "60b75408-f432-407d-bc69-66a0df49bc4c",
      "BlindReceipt": false,
      "OrderNumber": "PO-00026",
      "Status": "INVOICED",
      "OrderDate": "2018-02-22T00:00:00Z",
      "InvoiceDate": "2018-02-22T00:00:00Z",
      "Supplier": "Bayside Club",
      "SupplierID": "807b6bb2-69d5-46dc-8a6b-1bfe73519dca",
      "InvoiceNumber": "345345",
      "InvoiceAmount": 15,
      "PaidAmount": 15,
      "InvoiceDueDate": "2018-03-24T00:00:00Z",
      "RequiredBy": null,
      "BaseCurrency": "USD",
      "SupplierCurrency": "USD",
      "CreditNoteNumber": "CR-00026",
      "OrderStatus": "AUTHORISED",
      "StockReceivedStatus": "DRAFT",
      "UnstockStatus": "NOT AVAILABLE",
      "InvoiceStatus": "PAID",
      "CreditNoteStatus": "DRAFT",
      "LastUpdatedDate": "2018-04-19T04:03:06.52Z",
      "Type": "Simple Purchase",
      "CombinedInvoiceStatus": "INVOICED",
      "CombinedPaymentStatus": "PAID",
      "CombinedReceivingStatus": "NOT RECEIVED",
      "IsServiceOnly": false,
      "DropShipTaskID": null
    },
    {
      "ID": "4e60f55a-1690-4024-a58c-29d62106a645",
      "BlindReceipt": false,
      "OrderNumber": "PO-00080",
      "Status": "COMPLETED",
      "OrderDate": "2018-04-12T00:00:00Z",
      "InvoiceDate": "2018-04-10T00:00:00Z",
      "Supplier": "Bayside Wholesale",
      "SupplierID": "ca1e53f5-0560-4de6-8956-fac50a32540b",
      "InvoiceNumber": "INV-00080",
      "InvoiceAmount": 54.31,
      "PaidAmount": 0,
      "InvoiceDueDate": "2018-05-10T00:00:00Z",
      "RequiredBy": null,
      "BaseCurrency": "USD",
      "SupplierCurrency": "USD",
      "CreditNoteNumber": "CR-00080",
      "OrderStatus": "AUTHORISED",
      "StockReceivedStatus": "AUTHORISED",
      "UnstockStatus": "AUTHORISED",
      "InvoiceStatus": "PAID",
      "CreditNoteStatus": "AUTHORISED",
      "LastUpdatedDate": "2018-04-12T05:36:13.177Z",
      "Type": "Simple Purchase",
      "CombinedInvoiceStatus": "INVOICED / CREDITED",
      "CombinedPaymentStatus": "PAID",
      "CombinedReceivingStatus": "FULLY RECEIVED",
      "IsServiceOnly": false,
      "DropShipTaskID": "31760d16-c2e5-4620-9e09-0b236b776cef"
    }
  ]
}

Purchase Order sample of what gets returned (https://private-anon-92de43072c-dearinventory.apiary-mock.com/ExternalApi/v2/purchase/order?TaskID=02b08cd2-51d2-41e6-ab97-85bcd13e7136):

{
  "TaskID": "02b08cd2-51d2-41e6-ab97-85bcd13e7136",
  "CombineAdditionalCharges": false,
  "Memo": "",
  "Status": "AUTHORISED",
  "Lines": [
    {
      "ProductID": "c08b3876-89cc-46c4-af52-b77f058fdf81",
      "SKU": "Bread",
      "Name": "Baked Bread",
      "Quantity": 2,
      "Price": 2,
      "Discount": 0,
      "Tax": 0,
      "TaxRule": "Sales Tax on Imports",
      "SupplierSKU": "",
      "Comment": "",
      "Total": 4
    },
    {
      "ProductID": "c08b3876-89cc-46c4-af52-b77f058fdf81",
      "SKU": "Bread",
      "Name": "Baked Bread",
      "Quantity": 1,
      "Price": 2,
      "Discount": 0,
      "Tax": 0,
      "TaxRule": "Sales Tax on Imports",
      "SupplierSKU": "",
      "Comment": "",
      "Total": 2
    }
  ],
  "AdditionalCharges": [
    {
      "Description": "Half day training - Microsoft Office",
      "Reference": "",
      "Price": 3,
      "Quantity": 1,
      "Discount": 0,
      "Tax": 0,
      "Total": 3,
      "TaxRule": "Sales Tax on Imports"
    }
  ],
  "TotalBeforeTax": 9,
  "Tax": 0,
  "Total": 9
}
Data Management Gateway - High Availability and Scalability Preview

We are excited to announce the preview of Data Management Gateway High Availability and Scalability (https://docs.microsoft.com/azure/data-factory/data-factory-data-management-gateway). You can now associate multiple on-premises machines with a single logical gateway. The benefits are:
- Higher availability of Data Management Gateway (DMG): DMG is no longer a single point of failure in your big data solution or cloud data integration with Azure Data Factory, ensuring continuity with up to 4 nodes.
- Improved performance and throughput during data movement between on-premises and cloud data stores. Get more information on copy activity performance: https://docs.microsoft.com/en-us/azure/data-factory/data-factory-copy-activity-performance
- Both scale-out and scale-up support: not only can the DMG be installed across up to 4 nodes (scale out), but you can also increase or decrease the number of concurrent data movement jobs on each node (scale up/down) as needed. Note: the scale up/down capability is now available for all existing single-node (GA) gateways and is not limited to this preview.
- Richer Data Management Gateway monitoring experience: you can monitor each node's status and resource utilization in one place in the Azure portal, which simplifies DMG management.
Read all about it in the announcement blog post: https://azure.microsoft.com/en-us/blog/data-management-gateway-high-availability-and-scalability-preview/
Empty File is getting created in ADF

I have an ADF pipeline that contains data flows. The data flow reads an Excel file and writes the records to a SQL DB. The incorrect records are pushed to a blob storage sink as a CSV file. When all the records are correct, an empty .csv file still gets created and pushed to blob storage. How can I avoid the creation of this empty file?
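Mapping data flow sinks generally write an output file even when zero rows reach them, so one workaround is to follow the data flow with an If Condition that reads the error sink's rowsWritten metric from the data flow activity output and deletes the file when it is zero. The sketch below is hedged: ValidateExcelDataFlow, ErrorRowsSink, and ErrorFileBlob are placeholder names for the data flow activity, the error-row sink inside it, and a dataset pointing at the CSV that sink writes.

{
  "name": "DeleteEmptyErrorFile",
  "type": "IfCondition",
  "description": "Runs after the data flow; deletes the error CSV if the error sink wrote no rows",
  "dependsOn": [ { "activity": "ValidateExcelDataFlow", "dependencyConditions": [ "Succeeded" ] } ],
  "typeProperties": {
    "expression": {
      "value": "@equals(activity('ValidateExcelDataFlow').output.runStatus.metrics.ErrorRowsSink.rowsWritten, 0)",
      "type": "Expression"
    },
    "ifTrueActivities": [
      {
        "name": "DeleteEmptyCsv",
        "type": "Delete",
        "typeProperties": {
          "dataset": { "referenceName": "ErrorFileBlob", "type": "DatasetReference" },
          "enableLogging": false
        }
      }
    ]
  }
}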
Azure Data Factory July new features update

We are glad to announce that Azure Data Factory added more new features in July, including:
- Preview for Data Management Gateway high availability and scalability
- Skipping or logging incompatible rows during copy, for fault tolerance
- Service principal authentication support for Azure Data Lake Analytics
We will go through each of these new features one by one in the full post on the Azure blog: https://azure.microsoft.com/en-us/blog/azure-data-factory-july-new-features-update/
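To illustrate the fault-tolerance item above, the copy activity can skip incompatible rows and optionally redirect them to blob storage for later inspection. The fragment below is a hedged sketch in the current pipeline JSON shape rather than the exact syntax from this announcement; AzureBlobStorageLS and the redirect path are placeholders, and the source and sink types will vary with your actual stores.

{
  "name": "CopyWithFaultTolerance",
  "type": "Copy",
  "description": "Skips rows that fail type conversion or constraints and logs them to blob storage",
  "typeProperties": {
    "source": { "type": "BlobSource" },
    "sink": { "type": "SqlSink" },
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
      "linkedServiceName": { "referenceName": "AzureBlobStorageLS", "type": "LinkedServiceReference" },
      "path": "redirect/incompatiblerows"
    }
  }
}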
New Azure Data Factory self-paced hands-on lab for UI

A few weeks back, we announced the public preview of visual tools for Azure Data Factory (https://azure.microsoft.com/en-us/blog/adf-v2-visual-tools-enabled-in-public-preview/). We've since partnered with Pragmatic Works (https://pragmaticworks.com/), who have been long-time experts in the Microsoft data integration and ETL space, to create a new set of hands-on labs that you can now use: http://aka.ms/adflab2. In that repo, you will find data files and scripts in the Deployment folder. There are also lab manual folders for each lab module, as well as an overview presentation to walk you through the labs. More details on each module are in the announcement blog post: https://azure.microsoft.com/en-us/blog/new-azure-data-factory-self-paced-hands-on-lab-for-ui/
Iterative development and debugging using Data Factory

Data integration is becoming more and more complex as customer requirements and expectations continuously change. There is an increasing need among users to develop and debug their Extract-Transform-Load (ETL) and Extract-Load-Transform (ELT) workflows iteratively. Azure Data Factory (https://azure.microsoft.com/en-us/services/data-factory/) visual tools now allow you to do iterative development and debugging. You can create your pipelines and do test runs using the Debug capability in the pipeline canvas without writing a single line of code, and view the results of your test runs in the Output window of the pipeline canvas. Once your test run succeeds, you can add more activities to your pipeline and continue debugging in an iterative manner. You can also cancel test runs while they are in progress. You are not required to publish your changes to the data factory service before clicking Debug. This is helpful in scenarios where you want to make sure that new additions or changes work as expected before you update your data factory workflows in dev, test, or prod environments. Read about it in the full blog post: https://azure.microsoft.com/en-us/blog/iteratively-develop-and-debug-your-etl-elt-workflows-using-data-factory/
Azure Data Factory new capabilities are now generally available

Microsoft is excited to announce the general availability of new Azure Data Factory (ADF V2) features that will make data integration in the cloud easier than ever before. With a new browser-based authoring experience (https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal), you can accelerate your time to production by building and scheduling your data pipelines using drag and drop. Manage and monitor the health of your data integration projects at scale, wherever your data lives, in the cloud or on-premises, with enterprise-grade security. ADF comes with support for over 70 data connectors (https://docs.microsoft.com/en-us/azure/data-factory/data-factory-data-movement-activities) and enables you to easily dispatch data transformation jobs at scale to turn raw data into processed data that is ready for consumption by business analysts using their favorite BI tools or custom applications. For existing SQL Server Integration Services (SSIS) users (https://docs.microsoft.com/en-us/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview), ADF now allows you to easily lift and shift your SSIS packages into the cloud and run SSIS as a service with minimal changes to your existing packages. ADF will manage your SSIS resources for you so you can increase productivity and lower total cost of ownership. Meet your security and compliance needs while taking advantage of extensive capabilities and paying only for what you use. Read about it in the full blog post: https://azure.microsoft.com/en-us/blog/azure-data-factory-new-capabilities-are-now-generally-available/
Azure Data Factory offers SAP HANA and Business Warehouse data integration

Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data, and it supports copying data from 25+ data stores on-premises and in the cloud easily and performantly. Today, we are excited to announce that Azure Data Factory now enables loading data from SAP HANA and SAP Business Warehouse (BW) into various Azure data stores for advanced analytics and reporting, including Azure Blob storage, Azure Data Lake, and Azure SQL Data Warehouse. SAP software is some of the most widely used enterprise software in the world, and we have heard from you that it is crucial for Microsoft to empower customers to integrate their existing SAP systems with Azure to unlock business insights. Azure Data Factory starts its SAP data integration support with SAP HANA and SAP BW, the most popular components of the SAP stack among enterprise customers. Read more about what's new on the Azure blog: https://azure.microsoft.com/en-us/blog/azure-data-factory-offer-sap-hana-and-business-warehouse-data-integration/
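To give a flavour of what the SAP HANA side of such a copy looks like, here is a hedged sketch of a SAP HANA linked service in the current ADF JSON shape. The server address, credentials, and the SelfHostedIR integration runtime name are placeholders, and because SAP HANA typically runs on-premises, the connection goes through a self-hosted integration runtime (at the time of this announcement, the Data Management Gateway). A copy activity would then use a SAP HANA source with a SQL query and any of the supported Azure sinks, such as Blob storage or Azure SQL Data Warehouse.

{
  "name": "SapHanaLinkedService",
  "properties": {
    "description": "Hypothetical example - placeholders only",
    "type": "SapHana",
    "typeProperties": {
      "server": "<hana-server>:30015",
      "authenticationType": "Basic",
      "userName": "<user>",
      "password": { "type": "SecureString", "value": "<password>" }
    },
    "connectVia": { "referenceName": "SelfHostedIR", "type": "IntegrationRuntimeReference" }
  }
}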