biztalk migration
Announcing: Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy
As AI technologies continue to evolve, they offer businesses a unique opportunity to modernize operations, accelerate innovation, and unlock new growth potential. To stay ahead of the curve, organizations need a comprehensive integration and API strategy that seamlessly connects data, applications, and AI across their entire ecosystem.

We're excited to announce the "Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy" event. Over two action-packed days, you'll gain valuable insights from Azure leaders, industry analysts, and enterprise customers about how Azure Integration Services and Azure API Management are driving efficiency and agility and fueling business growth in the AI-powered era.

Why Attend?
From security to development, customer success stories to expert analyst insights, this event will highlight why APIs and integration are critical for success now and in the future.

Get exclusive industry insights: Gain expert perspectives from IDC's Shari Lava, Azure product leaders, and Forrester consultant Andrew Nadler on the latest trends shaping enterprise integration and API strategies.
Learn from real-world customer stories: Hear firsthand from organizations like DocuSign, Visa, LyondellBasell, Metcash, Khoj, Brisbane City Council, Moneris, Heineken, Transcard, and CareFirst BlueCross BlueShield on how they are transforming operations with Azure Integration Services and Azure API Management.
Accelerate your AI and integration strategy: Learn how Azure Logic Apps makes AI-driven automation more accessible than ever, and how Azure API Management empowers businesses to securely scale AI-powered APIs.

Event Highlights

Day 1: Drive Business Growth with a Modern Integration Platform
In today's competitive landscape, businesses must seamlessly connect data, applications, and AI. On Day 1, you'll explore how Azure Integration Services helps organizations break down data silos, unlock real-time insights, and optimize operations. Learn how connected data streams enable smarter, faster decision-making, while AI-powered workflows reduce complexity and drive operational efficiency. We'll also explore how businesses are modernizing legacy systems by migrating from BizTalk and other on-premises integration solutions to Azure Integration Services, gaining greater scalability, agility, and business continuity.

Day 2: Power AI and Enterprise Innovation with an API-First Strategy
On Day 2, you'll dive deep into how APIs are the backbone of modern digital ecosystems. APIs enable businesses to scale faster, enhance developer experiences, and create new revenue streams. Learn how Azure API Management helps you secure, manage, and monetize APIs while accelerating AI adoption. You'll also discover best practices for securing and governing APIs across distributed environments, ensuring that your AI-powered ecosystem remains secure, scalable, and compliant.

Streamed Live Across Multiple Time Zones
Join us no matter where you are! We're streaming live across multiple time zones, so you can participate at a time that works best for you.

US/Canada: Reserve your seat today!
Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM PDT
Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM PDT

Australia/New Zealand: Reserve your seat today!
Day 1: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM AEDT
Day 2: Thursday, 1 May 2025 | 9:00 AM – 12:30 PM AEDT

Europe: Reserve your seat today!
Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM BST
Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM BST

Ready to Future-Proof Your Integration and API Strategy?
Don't miss this exclusive opportunity to learn from industry experts, Azure leaders, and top enterprises. Discover how to future-proof your integration and API strategy to drive AI-powered growth and business success.

Announcement: General Availability of Logic Apps Hybrid Deployment Model
We are thrilled to announce the General Availability of the Logic Apps Hybrid Deployment Model, a groundbreaking feature that offers unparalleled flexibility and control to our customers. This deployment model allows you to run Logic Apps workloads on customer-managed infrastructure, giving you the option to host your integration solutions on-premises, in a private cloud, or even in a third-party public cloud.

With the Logic Apps Hybrid Deployment Model, you can tailor your integration solutions to meet your specific needs, whether for regulatory compliance, data privacy, or network restrictions. This model ensures that you have the freedom to choose the best environment for your workflows, while still leveraging the powerful capabilities of Azure Logic Apps.

The Hybrid Deployment Model supports a semi-connected architecture, offering local processing of workflows, local storage, and local network access. This means that the data processed by the workflows remains in your local SQL Server, and you can connect to local networks. Additionally, the built-in connectors execute in your local compute, giving you access to local data sources and higher throughput.

Since we launched the public preview, we have received an overwhelmingly positive response from customers across various industries. Many customers, including those looking to migrate from BizTalk Server, have expressed interest in this offering due to its ability to co-locate integration platforms near key line-of-business systems, avoiding dependencies on the public internet to process transactions.

Journey of the Hybrid Deployment Model Feature
At the Integrate 2024 event, we announced the early access preview of the Hybrid Deployment Model for Logic Apps Standard. This initial phase allowed interested parties to nominate themselves for early access and provided valuable feedback on the model's functionality and benefits.

Following the private preview, we launched the public preview, which empowered our customers with additional flexibility and control. This phase allowed customers to build and deploy workflows on customer-managed infrastructure, offering the option to run Logic Apps on-premises, in a private cloud, or in a third-party public cloud. The public preview also introduced the semi-connected architecture, enabling local processing of workflows and access to local data sources.

In October 2024, we refreshed the public preview and received an overwhelmingly positive response from customers across various industries. This feedback highlighted the model's ability to meet specific use cases, such as migrating from BizTalk Server and co-locating integration platforms near key line-of-business systems. The public preview refresh also emphasized the model's alignment with our promise of providing customers with more options to meet their business needs.

We are excited to see how our customers will leverage the Logic Apps Hybrid Deployment Model to meet their business needs and drive innovation. Thank you for your continued support and feedback.

New features in the GA release:

Open Telemetry support: OpenTelemetry is a vendor-neutral, open-source observability framework for instrumenting, generating, collecting, and exporting telemetry data. Support for OpenTelemetry in the Hybrid Deployment Model ensures seamless logging in semi-connected scenarios and provides the ability to choose any observability platform as a telemetry endpoint. More details here.
To set up the OpenTelemetry capability from the Azure portal, follow these steps:

1. Open the host.json file in the root directory of the SMB file share path configured for your logic app.
2. In the host.json file, at the root level, add the telemetryMode setting with the OpenTelemetry value, for example:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "telemetryMode": "OpenTelemetry"
}

When you enable OpenTelemetry in the host.json file, your logic app exports telemetry based on the OpenTelemetry-supported app settings that you define in the environment. Add the following app settings from the portal by navigating to Containers --> Environment variables --> Edit and deploy (a minimal, illustrative example appears at the end of this feature list).

OTEL_EXPORTER_OTLP_ENDPOINT: The OpenTelemetry Protocol (OTLP) exporter endpoint URL where the telemetry data is sent.
OTEL_EXPORTER_OTLP_HEADERS (optional): A list of headers to apply to all outgoing data. Commonly used to pass authentication keys or tokens to your observability backend.

If your OpenTelemetry endpoint requires other OpenTelemetry-related settings, include those settings in the app settings too.

Support for Zip deployment through VS Code: Support for Zip deployment in VS Code makes the deployment experience more reliable. This feature uses Microsoft Entra authentication for deployment, so the VS Code machine does not require permissions on the SMB share, and you do not need to provide SMB credentials in subsequent deployments. To use Zip deployment, follow these steps:

1. Create an app registration.
2. In the VS Code deployment, provide the Client ID, Object ID, and Client secret values.

If you have concerns about creating an app registration, you can continue to use the SMB deployment option by choosing "Use SMBDeployment For Hybrid" in the extension configuration of VS Code. If you would like to use Zip deployment in an existing Logic App, you will need to manually add the app settings as indicated here. The Zip deployment APIs can also be used in CI/CD pipelines for DevOps deployment. We will be publishing another blog with detailed steps on the DevOps process.

Support for more regions: We are pleased to announce the expansion of hybrid deployment support to additional regions, in response to valuable customer feedback. This enhancement aims to better meet the diverse geographic and operational requirements of your businesses. Hybrid deployment is now available in the following regions: Central US, East Asia, East US, North Central US, Southeast Asia, Sweden Central, UK South, West Europe, and West US.

Logic Apps Rules Engine support on Linux containers: In this release, we have added support for the Azure Logic Apps Rules Engine to run on Linux containers, which enables customers to use the Rules Engine capabilities in hybrid Logic Apps.

Improvements for effective scaling and performance: We have introduced a few improvements in the runtime storage and the scaling behaviour aimed at improving performance and achieving effective scaling.
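As a minimal sketch of the two OpenTelemetry app settings described above, here is how they might look when expressed as container environment variables (name/value pairs). The endpoint URL and the header value are placeholders for illustration only; substitute the endpoint and credentials of your own observability backend.

[
  {
    "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
    "value": "https://otel-collector.contoso.example:4318"
  },
  {
    "name": "OTEL_EXPORTER_OTLP_HEADERS",
    "value": "Authorization=Bearer <your-backend-token>"
  }
]

The same values can also be entered one at a time in the portal under Containers --> Environment variables.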
For more details on scaling and performance, refer to the following articles:

Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard | Microsoft Community Hub
Hybrid deployment model for Logic Apps - Performance Analysis and Optimization recommendations | Microsoft Community Hub

Diagnostic tool: To assist with troubleshooting environment configuration issues, we have created a troubleshooting tool that reviews the health of all the components of the hybrid deployment and provides insights. You can find the script in our GitHub repository. Select the troubleshoot.ps1 file, copy it to a folder, and run the script using PowerShell. Run the script on a machine where you have access to kubectl.

References:

Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn
Set up your own infrastructure for Standard logic app workflows - Azure Logic Apps | Microsoft Learn
Set up and view enhanced telemetry for Standard workflows - Azure Logic Apps | Microsoft Learn
Announcing the BizTalk Server 2020 Cumulative Update 6

The BizTalk Server product team has released Cumulative Update 6 for BizTalk Server 2020. Cumulative Update 6 contains all released functional and security fixes for customer-reported issues for BizTalk Server 2020. CU6 also adds support for the following new Microsoft platforms:

Microsoft Windows Server 2022
Microsoft SQL Server 2022
Microsoft Windows 11

BizTalk Server 2016 is out of mainstream support, with extended support ending in 2027. If you are running BizTalk Server 2016 or earlier versions of the product, you should upgrade to BizTalk Server 2020 CU6 or strongly consider migrating to Azure Logic Apps. Please fill out this survey: https://aka.ms/biztalklogicapps.

More information about CU6: This cumulative update includes all the product components. However, only the components that are currently installed on the system are updated. CU6 includes fixes in the following areas:

BizTalk Server Adapters updates: WCF-SAP adapter, SFTP adapter
BizTalk Server Administration Tools and Management APIs: Lost changes to SQL Server Agent jobs

You can obtain the software from the Microsoft Download Center at https://aka.ms/BTS2020CU6. For more information about BizTalk Server 2020 CU6, read the Microsoft Knowledge Base article posted at https://aka.ms/BTS2020CU6KB.

🔁 Public Preview Refresh: More Power to Data Mapper in Azure Logic Apps
We're back with a Public Preview refresh for the Data Mapper in Azure Logic Apps (Standard) — bringing forward some long-standing capabilities that are now fully supported in the new UX. In our initial announcement, we introduced a redesigned experience focused on usability, error handling, and improved mapping for complex schemas. As we continue evolving the tool, we're working to bring feature parity with the classic experience, while layering in modern enhancements along the way. With this update, several existing capabilities from the legacy Data Mapper are now available in the new preview version — so you can bring your advanced scenarios forward with confidence.

🛠️ Run XSLT Inside Your Data Map
The ability to apply XSLT has long been a powerful feature in Logic Apps, and we're excited to bring Run XSLT support into the new UX. You can now invoke reusable transformation logic from your map, including:

- Enterprise-grade XSLT
- Predefined templates or logic from your BizTalk workflows

How to try it out:
1. Create a new data map: right-click the MapDefinitions or Maps folder and click Create new data map.
2. Store the XSLT file under Artifacts -> DataMapper/Extension -> InlineXslt.
3. Open the data map and search for Run XSLT in the functions panel.
4. Select the function, then simply pick the XSLT file you want to run from the dropdown.
5. Connect it to the desired destination node. In my case, the function simply adds a "Placeholder" value for the Name node at the destination, alongside an "EmployeeType" node. Note that you do not need to connect any source node to the XSLT function, because this is custom XSLT logic that is applied directly at the destination node.

Upon testing the map, the correct value is generated in the destination schema.

🔍 Execute XPath to Extract Targeted Values
Execute XPath is now supported in the new experience, giving you control to extract specific values from nested XML structures. This function is particularly useful for:

- Accessing attributes and nested elements
- Applying logic based on the structure or content of incoming data

How to try it out:
1. Search for Execute XPath in the functions panel.
2. Select the function and add the expression you want to extract.
3. Map it to the destination node.

Here is what the map will look like: the test payload correctly creates multiple Address nodes at the destination based on the Address node at the source.

🧩 Use Custom XML Functions
Custom XML functions allow you to define and reuse logic across your map. This helps reduce duplication and supports schema-specific transformations. Now that support is available in the new UX, you can:

- Wrap complex logic into manageable components
- Handle schema-specific edge cases with ease

How to try it out:
1. Add the .xml function file under Artifacts -> DataMapper/Extension -> Functions.
2. Open the data map and, under the Utility category of functions, select the new function. In our case, the XML function is called Age.
3. Connect the function input to the Date_of_Birth node at the source and the output to the Age node at the destination.

The map will look something like this. Test the map and notice that the age is calculated correctly at the destination node.

🌒 Dark Mode Support in VS Code
The new UX now respects Dark Mode in VS Code, giving you a visually cohesive and low-contrast authoring experience — perfect for long mapping sessions. No extra steps needed — Dark Mode works automatically based on your VS Code theme settings.
⚙️ How to Enable the New Experience
If you haven't yet tried the new UX:

1. Open your Logic Apps (Standard) project in VS Code.
2. Go to Logic Apps (Standard) extension → Settings → Data Mapper.
3. Select Version ~2.

A hedged sketch of the corresponding settings.json entry appears at the end of this post. You'll find detailed walkthroughs in the initial preview announcement blog.

💬 We'd Love Your Feedback
We're continuously evolving the Data Mapper, and your feedback is key to getting it right — especially as we bring more advanced transformation scenarios into the new experience.

👉 Submit your feedback here
🐛 Found an issue or have a specific feature request? Let us know on GitHub Issues

Thanks again for being part of the journey — more updates coming soon! 🚀
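If you prefer editing your VS Code settings.json directly instead of using the Settings UI, the sketch below shows what the equivalent entry might look like. The setting key name is an assumption for illustration; confirm the exact key and accepted value in the Logic Apps (Standard) extension settings UI under "Data Mapper".

// VS Code settings.json (user or workspace scope)
// NOTE: the key below is assumed, not confirmed; the Settings UI entry
// ("Data Mapper Version" set to ~2) is the authoritative way to enable the new UX.
{
  "azureLogicAppsStandard.dataMapperVersion": "~2"
}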
General Availability: Enhanced Data Mapper Experience in Logic Apps (Standard)

We're excited to announce the General Availability (GA) of the redesigned Data Mapper UX in the Azure Logic Apps (Standard) extension for Visual Studio Code. This release marks a major milestone in our journey to modernize and streamline data transformation workflows for integration developers.

What's new
The new UX, previously available in public preview, is now the default experience in the Logic Apps Standard extension. This GA release reflects direct feedback from our integration developer community. We've resolved blockers that we heard from customers and usability issues that impacted performance and stability, including:

Opening V1 maps in V2: Seamlessly open and edit existing maps you have already created, with the latest visual capabilities.
Load schemas on Mac: Addressed schema-related crashes on macOS for a smoother experience.
Function documentation updates: Improved guidance and examples for built-in collection functions that apply to repeating nodes.

Stay connected
We would love to hear your feedback. Please use this form link to let us know if there are any remaining gaps or scenarios that are not yet covered.
Transform Your Integration Strategy with Azure Integration Services

Still on Microsoft BizTalk Server or other legacy integration solutions? If you're relying on BizTalk or other legacy systems, you're already feeling the pain: rising costs, performance bottlenecks, and limited scalability. These outdated systems are holding you back, but the good news? The time to modernize is NOW, and we've got the event that will show you how.

Join us at "Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy", where industry leaders like Visa, LyondellBasell, and Metcash share how they reimagined their integration landscape with Azure. This is your blueprint for moving beyond the limitations of legacy systems and unlocking innovation with a cloud-native, AI-ready approach. By attending, you'll gain exclusive insights into how leading organizations have turned integration challenges into competitive advantages, positioning themselves for future growth with Azure.

Real Stories, Real Impact

Visa: Revolutionizing Operations with Azure Logic Apps
Visa's journey from BizTalk to Azure Logic Apps isn't just a story of modernization—it's a game-changer. By automating complex workflows and managing over 100 HR systems, Visa reduced manual intervention and slashed infrastructure costs by 95%. With Azure, Visa is now set to scale operations and leverage AI for continued growth. Discover the details of how Visa is preparing for tomorrow's challenges today.

LyondellBasell: Scaling Beyond BizTalk
LyondellBasell, a global leader in chemicals, broke free from BizTalk's limitations. Azure Integration Services empowered them with hybrid connectivity and real-time visibility, enabling them to streamline workflows and boost developer efficiency by 50%. Learn how they unlocked faster decision-making and stronger business outcomes.

Brisbane City Council: From Legacy to API-First Agility
Brisbane City Council, Australia's largest local government, faced limitations with its legacy integration solution. With Azure, they cut data processing times from 1 hour to under 5 minutes. The shift to an API-first approach future-proofed their strategy. Find out how this local government innovated without disruption.

Metcash: Peak Retail Performance, Zero Downtime
After moving to Azure Integration Services, Metcash, an Australian wholesaler, processed 8.65 million API calls in 48 hours during a major retail event without a single minute of downtime. With Azure, they achieved unrivaled resilience and scalability, all while slashing costs. See how Metcash's transformation can serve as your blueprint for success.

What You'll Learn:
- How to cut costs and modernize your integration landscape
- Ways to scale with cloud-native solutions and AI-driven automation
- Strategies to secure every API with enterprise-grade governance
- Real-world migration paths from BizTalk and other platforms

Choose Your Region and Register Now

US/Canada: Reserve your seat today!
Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM PDT
Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM PDT

Australia/New Zealand: Reserve your seat today!
Day 1: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM AEDT
Day 2: Thursday, 1 May 2025 | 9:00 AM – 12:30 PM AEDT

Europe: Reserve your seat today!
Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM BST
Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM BST

Hybrid deployment model for Logic Apps - Performance Analysis and Optimization recommendations
A few weeks ago, we announced the Public Preview Refresh release of the Logic Apps hybrid deployment model, which allows customers to run Logic Apps workloads on customer-managed infrastructure. This model provides the flexibility to execute workflows either on-premises or in any cloud environment, thereby offering enhanced control over the operation of logic apps. By utilizing customer-managed infrastructure, organizations can adhere to regulatory compliance requirements and optimize performance according to their specific needs.

As customers consider leveraging hybrid environments, understanding the performance of logic apps under various configurations and scenarios becomes critical. This document offers an in-depth performance evaluation of Azure Logic Apps within a hybrid deployment framework. It examines several key factors, such as CPU and memory allocation and scaling mechanisms, providing valuable insights aimed at maximizing the application's efficiency and performance.

Achieving Optimal Logic Apps Performance in Hybrid Deployments
In this section, we will explore the key aspects that affect Logic Apps performance when deployed in a hybrid environment. Factors such as the underlying infrastructure of the Kubernetes environment, the SQL configuration, and the scaling configuration can significantly impact the efficiency of workflows and the overall performance of the applications. The following blog entry provides details of the scaling mechanism of the hybrid deployment model: Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard | Microsoft Community Hub

Configure container resource allocation: When you create a Logic App, a default of 0.5 vCPU and 1 GiB of memory is allocated. From the Azure portal, you can modify this allocation from the Container blade (see Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn). Currently, the maximum allocation is 2 vCPU and 4 GiB of memory per app; in the future, there will be a provision to choose higher allocations. For CPU-intensive or memory-intensive processing, such as custom code executions, select a higher value for these parameters. In the next section, we compare performance across different values of the CPU and memory allocation. This allocation also affects the billing calculation of the Logic App resource; refer to vCPU calculation for more details on the billing impact.

Optimize the node count and size in the Kubernetes cluster: Kubernetes runs application workloads by placing containers into Pods that run on Nodes. A node may be a virtual or physical machine, depending on the cluster. A node pool is a group of nodes that share the same configuration (CPU, memory, networking, OS, maximum number of pods, and so on). You can choose the capacity (cores and memory), minimum node count, and maximum node count for each node pool of the Kubernetes cluster. We recommend allocating higher capacity for CPU-intensive or memory-intensive applications.

Configure scale rule settings: For a Logic App resource, we recommend configuring the maximum and minimum replica counts used when a scale event occurs. A higher value for the maximum replicas helps with sudden spikes in the number of application requests. The interval at which the scaler checks for scale events and the cooldown period for scaling can also be configured from the Scale blade of the Logic App resource. These parameters shape the scaling pattern; a hedged configuration sketch follows below.
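To make the knobs discussed above concrete, here is a minimal sketch of how the container resources and scale settings might be expressed in an ARM/Bicep-style container template. The property names and nesting are assumptions for illustration, not the exact hybrid Logic Apps resource schema; in practice you set these values from the Container and Scale blades in the Azure portal. The numbers mirror the values used later in this article (2 vCPU / 4 GiB, up to 20 replicas, 30-second polling, 300-second cooldown).

{
  "template": {
    "containers": [
      {
        "name": "logicapp",
        "resources": {
          "cpu": 2.0,
          "memory": "4Gi"
        }
      }
    ],
    "scale": {
      "minReplicas": 1,
      "maxReplicas": 20,
      "pollingInterval": 30,
      "cooldownPeriod": 300
    }
  }
}

A higher maxReplicas absorbs sudden request spikes, while the polling interval and cooldown period control how quickly the scaler reacts and how long it waits before scaling back in.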
Optimize the SQL Server configuration: The hybrid deployment model uses Microsoft SQL for runtime storage. As such, many SQL operations are performed throughout the execution of a workflow, and SQL capacity has a significant impact on the performance of the app. The Microsoft SQL server can be either SQL Server on Windows or an Azure SQL database. A few recommendations on the SQL configuration for better performance:

- If you are using an Azure SQL database, run it on a SQL elastic pool.
- If you are using SQL Server on Windows, run with at least a 4 vCPU configuration.
- Scale out the SQL server once its CPU usage hits 60-70% of the total available CPU.

Performance analysis: For this performance analysis exercise, we used a typical enterprise integration scenario that includes the components below.

- Data transformation: XSLT transformation, validation, and XML parsing actions
- Data routing: File System connector for storing the transformed content in a file share
- Message queuing: RabbitMQ connector for sending the transformation result to a RabbitMQ queue endpoint
- Control operations: For-each loop for looping through multiple records, condition execution, scope, and error handling blocks
- Request/response: The XML data is transmitted via an HTTP request, and the status is returned as a response

Summary: For these tests, we used the following environment settings:

- Kubernetes cluster: AKS cluster with Standard D2s v3 nodes (2 vCPU, 8 GiB memory)
- Max replicas: 20
- Cooldown period: 300 seconds
- Polling interval: 30 seconds

With the above environment and settings, we performed multiple application tests with different configurations of SQL server, resource allocation, and test duration using the Azure Load Testing tool. The following table summarizes the response time, throughput, and total vCPU consumption for each configuration. See each scenario for detailed information.

Scenario | SQL | CPU and memory per Logic App | Test duration | Load | 90th percentile response time | Throughput | Total vCPU consumed
Scenario 1 | SQL General Purpose V2 | 1 vCPU / 2 GiB | 10 minutes with 50 users | 503 requests | 68.62 seconds | 0.84/s | 3.42
Scenario 2 | SQL elastic pool - 4000 DTU | 1 vCPU / 2 GiB | 10 minutes with 50 users | 1004 requests | 40.74 seconds | 1.65/s | 3
Scenario 3 | SQL elastic pool - 4000 DTU | 2 vCPU / 4 GiB | 10 minutes with 50 users | 997 requests | 40.63 seconds | 1.66/s | 4
Scenario 4 | SQL elastic pool - 4000 DTU | 2 vCPU / 4 GiB | 30 minutes with 50 users | 3421 requests | 26.6 seconds | 1.9/s | 18.6
Scenario 5 | SQL elastic pool - 4000 DTU | 0.5 vCPU / 1 GiB | 30 minutes with 50 users | 3055 requests | 31.38 seconds | 1.7/s | 12.4
Scenario 6 | SQL 2022 Enterprise on Standard D4s v3 VM | 0.5 vCPU / 1 GiB | 30 minutes with 50 users | 4105 requests | 27.15 seconds | 2.28/s | 10

Scenario 1: SQL General Purpose V2 with 1 vCPU and 2 GiB memory - 10-minute test with 50 users
In this scenario, we conducted a load test for 10 minutes with 50 users, with a Logic App configuration of 1 vCPU and 2 GiB memory and an Azure SQL database running on the General Purpose V2 plan. There were 503 requests with multiple records in each payload; the test achieved a 90th percentile response time of 68.62 seconds and a throughput of 0.84 requests per second.
Scaling: The Kubernetes nodes scaled out to 12 nodes, and in total 3.42 vCPUs were used by the app for the test duration.
SQL metrics: The CPU usage of the SQL server reached 90% quite early and stayed above 90% for the remaining duration of the test.
From our backend telemetry, we also observed that the action executions were fast, but there was latency between the actions, which indicates SQL bottlenecks.

Scenario 2: SQL elastic pool, with 1 vCPU and 2 GiB memory - 10-minute test with 50 users
In this scenario, we conducted a load test for 10 minutes with 50 users, with a Logic App configuration of 1 vCPU and 2 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 1004 requests with multiple records in each payload; the test achieved a 90th percentile response time of 40.74 seconds and a throughput of 1.65 requests per second.
Scaling: The Kubernetes nodes scaled out to 15 nodes, and in total 3 vCPUs were used by the app for the test duration.
SQL metrics: The SQL server's CPU utilization peaked at 2% of the elastic pool.

Scenario 3: SQL elastic pool, with 2 vCPU and 4 GiB memory - 10-minute test with 50 users
In this scenario, we conducted a load test for 10 minutes with 50 users, with a Logic App configuration of 2 vCPU and 4 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 997 requests with multiple records in each payload; the test achieved a 90th percentile response time of 40.63 seconds and a throughput of 1.66 requests per second.
Scaling: The Kubernetes nodes scaled out to 21 nodes, and in total 4 vCPUs were used by the app for the test duration.
SQL metrics: The SQL server's CPU utilization peaked at 5% of the elastic pool.

Scenario 4: SQL elastic pool, with 2 vCPU and 4 GiB memory - 30-minute test with 50 users
In this scenario, we conducted a load test for 30 minutes with 50 users, with a Logic App configuration of 2 vCPU and 4 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3421 requests with multiple records in each payload; the test achieved a 90th percentile response time of 26.67 seconds and a throughput of 1.90 requests per second.
Scaling: The Kubernetes nodes scaled out to 20 nodes, and in total 18.6 vCPUs were used by the app for the test duration.
SQL metrics: The SQL server's CPU utilization peaked at 4.7% of the elastic pool.

Scenario 5: SQL elastic pool, with 0.5 vCPU and 1 GiB memory - 30-minute test with 50 users
In this scenario, we conducted a load test for 30 minutes with 50 users, with a Logic App configuration of 0.5 vCPU and 1 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3055 requests with multiple records in each payload; the test achieved a 90th percentile response time of 31.38 seconds and a throughput of 1.70 requests per second.
Scaling: The Kubernetes nodes scaled out to 18 nodes, and in total 12.4 vCPUs were used by the app for the test duration.
SQL metrics: The SQL server's CPU utilization peaked at 8.6% of the elastic pool CPU.

Scenario 6: SQL 2022 Enterprise Gen2 on Windows 2022 on a Standard D4s v3 image, with 0.5 vCPU and 1 GiB memory - 30-minute test with 50 users
In this scenario, we conducted a load test for 30 minutes with 50 users, with a Logic App configuration of 0.5 vCPU and 1 GiB memory and a SQL database running on an on-premises SQL Server 2022 Enterprise Gen2 instance on Windows Server 2022 with a Standard D4s v3 image (4 vCPU and 16 GiB memory). There were 4105 requests with multiple records in each payload; the test achieved a 90th percentile response time of 27.15 seconds and a throughput of 2.28 requests per second.
Scaling: The Kubernetes nodes scaled out to 8 nodes, and in total 10 vCPUs were used by the app for the test duration.
SQL metrics: The CPU usage of the SQL server went above 90% after a few minutes, and there was latency on a few runs.

Findings and recommendations: The following are the findings and recommendations for this performance exercise. Note that this load test was conducted under specific conditions; if you conduct a similar test, the results and findings might vary depending on factors such as workflow complexity, configuration, resource allocation, and network configuration.

- The KEDA scaler performs scale-out and scale-in operations quickly, so total vCPU usage remains quite low even though the nodes scaled out in the range of 1-20 nodes.
- The SQL configuration plays a crucial role in reducing the latency between action executions. For a satisfactory load test, we recommend starting with at least a 4 vCPU configuration on the SQL server and scaling out once CPU usage of the SQL server hits 60-70% of the total available CPU. For critical applications, we recommend a dedicated SQL database for better performance.
- Increasing the dedicated vCPU allocation of the Logic App resource helps with the SAP connector, the Rules Engine, .NET Framework-based custom code operations, and applications with many complex workflows.
- As a general recommendation, regularly monitor performance metrics, adjust configurations to meet evolving requirements, and follow the coding best practices for Logic Apps Standard.

Consider reviewing the following article for recommendations to optimize your Azure Logic Apps workloads: https://techcommunity.microsoft.com/blog/integrationsonazureblog/logic-apps-standard-hosting--performance-tips/3956971

Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard
Hybrid Logic Apps offer a unique blend of on-premises and cloud capabilities, making them a versatile solution for various integration scenarios. A key feature of hybrid deployment models is their ability to scale efficiently to manage different workloads. This capability enables customers to optimize their compute costs during peak usage by scaling up to handle temporary spikes in demand and then scaling down to reduce costs when the demand decreases. This blog will explore the scaling mechanism in hybrid deployment models, focusing on the role of the KEDA operator and its integration with other components.
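For readers new to KEDA, the sketch below shows the general shape of a KEDA ScaledObject expressed as a JSON manifest, with the replica bounds, polling interval, and cooldown period that the Scale settings discussed earlier map onto. This is an illustrative, generic example only: the hybrid deployment provisions and manages its own scaling resources for you, and the trigger shown here (a standard KEDA CPU trigger) is a placeholder rather than the scaler the Logic Apps runtime actually uses.

{
  "apiVersion": "keda.sh/v1alpha1",
  "kind": "ScaledObject",
  "metadata": { "name": "logicapp-scaledobject" },
  "spec": {
    "scaleTargetRef": { "name": "logicapp-deployment" },
    "minReplicaCount": 1,
    "maxReplicaCount": 20,
    "pollingInterval": 30,
    "cooldownPeriod": 300,
    "triggers": [
      { "type": "cpu", "metricType": "Utilization", "metadata": { "value": "70" } }
    ]
  }
}

Conceptually, the KEDA operator evaluates the trigger on each polling interval, scales the target out toward maxReplicaCount when demand rises, and scales it back in after the cooldown period once demand subsides, which is what keeps total vCPU consumption low during quiet periods.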