Q1’2025: Azure Integration Services Quarterly Highlights and Insights
From reinventing hybrid integration to unlocking AI-powered productivity and simplifying API management across ecosystems, the first quarter of 2025 was all about making integration smarter, faster, and more accessible for everyone. Whether you're a developer modernizing legacy workflows, an IT pro securing mission-critical APIs, or a business technologist building intelligent automations, Azure Integration Services and Azure API Management are moving at the speed of innovation. Here’s what stood out this quarter and how these updates can help accelerate your next move.

Product Announcements

We’ve been hard at work delivering powerful new capabilities across Azure Logic Apps and Azure API Management, designed to help you move faster, modernize smarter, and innovate with confidence. Here’s what’s new:

Azure Logic Apps Hybrid Deployment Model [Public Preview Refresh]

The public preview of the Logic Apps hybrid deployment model has been refreshed, allowing you to run Logic Apps on your own infrastructure, whether that’s on-premises, at the edge, or across multi-cloud environments. New highlights include:

- .NET Framework custom code support on Linux containers – Bring over BizTalk-style transformations without a rewrite.
- SAP built-in connector now supported on Linux containers – Run SAP workflows anywhere you deploy Hybrid Logic Apps.
- New RabbitMQ built-in connector – Seamlessly connect to on-premises message brokers for hybrid queueing scenarios.

Read the full update.

Data Mapper in Azure Logic Apps (Standard) [Public Preview Refresh]

The updated Data Mapper in Logic Apps (Standard) is now even more powerful, bringing forward critical BizTalk-era capabilities with a modern UX:

- Run enterprise-grade XSLT logic from reusable templates
- Extract values from complex XML using XPath
- Build and reuse custom XML functions within your maps

Dive into the details.

Azure API Center Continues to Evolve

Azure API Center continues to evolve as the single source of truth for your organization’s APIs. Two key updates are now in public preview:

- Managed API Center Portal – A turnkey developer portal to explore, customize, and govern APIs with Microsoft Entra integration. Set up instructions for the API Center portal.
- Amazon API Gateway integration – Automatically import and keep APIs in sync from AWS into API Center. No manual updates. No drift. Learn about the integration.

Developer Tooling: A Better Azure API Management Experience in VS Code

We’ve released a new version of the Azure API Management VS Code extension, focused on making it easier to draft, debug, and explain API policies, fueled by GitHub Copilot and AI enhancements. It’s all about helping developers move faster and smarter. Learn more.

Your AI and Modernization Playbooks for Logic Apps

Bring AI into your workflows with the Logic Apps AI Playbook. Ready to infuse intelligence into your apps, bots, or business processes? Our AI Playbook for Azure Logic Apps gives you a practical guide to building smarter workflows fast. Whether you're starting from scratch or scaling existing automations, you’ll find step-by-step examples to accelerate your journey:

- Build conversational apps using Azure OpenAI and Azure AI Search
- Automate document intelligence with built-in connectors to Azure AI Document Intelligence
- Power real-time retrieval with RAG (Retrieval-Augmented Generation) patterns
- Create intelligent agents using Semantic Kernel and Logic Apps orchestration

Explore AI scenarios and examples.

Still running mainframe workloads? We've got a playbook for that too.
Modernizing mission-critical systems doesn’t have to mean starting over. With our Mainframe and Midrange Modernization Playbook, Logic Apps helps you replatform legacy workloads with less risk and more flexibility. Discover how to connect to existing systems, orchestrate modern services, and unlock agility without rewriting everything. Explore the mainframe modernization guide.

Partnership Announcements

Azure API Management comes with a powerful, fully integrated developer portal built to help teams publish APIs, onboard developers, and manage access with enterprise-grade scalability and security. For many organizations, this out-of-the-box portal offers exactly what they need. But some businesses need more advanced branding, tailored onboarding journeys, or a completely bespoke developer experience. That’s why we’re partnering with Pronovix and ApiBoost, two industry leaders in custom API portals. These partnerships give customers even more flexibility to extend and personalize their API programs.

Customer Success Stories

- Telefônica Brasil is transforming call center operations for 115M+ users using Azure OpenAI and Azure API Management. With AI-driven insights and secure API access for 33,000 employees, they’ve cut average handling time by 9% and accelerated innovation. Read the story.
- Intermountain Health has saved thousands of hours by integrating Azure API Management, OpenAI, and Arize AI into their cloud AI infrastructure. Responsible AI deployment meets real-time observability for better patient outcomes. Learn more.
- Delta Dental of California unified its integration architecture across APIs, messaging, and app delivery using Azure API Management, App Gateway, Redis, and more, streamlining operations and unlocking agility. Explore the case study.
- NTT Communications uses Azure Logic Apps to automate threat response as part of its Microsoft Sentinel-powered security stack, enabling faster correlation of security signals across systems. Read the story.

What’s Coming Up: Events You Won’t Want to Miss

Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy

Join us for a dynamic two-day virtual event, April 29–30, where we’ll explore how Azure Integration Services and Azure API Management are powering intelligent, connected experiences. Hear directly from Azure product leaders, industry analysts, and enterprise customers as they share how they’re using Azure to scale AI-driven innovation, modernize integrations, and build API-first strategies that unlock new business value. This global event is designed to fit your schedule, with live sessions tailored for key regions:

US/Canada: Reserve your seat today!
- Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM PDT
- Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM PDT

Australia/New Zealand: Reserve your seat today!
- Day 1: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM AEDT
- Day 2: Thursday, 1 May 2025 | 9:00 AM – 12:30 PM AEDT

Europe: Reserve your seat today!
- Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM BST
- Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM BST

Microsoft Build 2025

Whether you're joining us in Seattle or tuning in online, Microsoft Build is your front-row seat to the future of technology. Taking place May 19–22, this year’s conference will showcase groundbreaking innovations across cloud, AI, developer tools, and of course APIs and integration. The session catalog is live.

Stay Connected!

Q1 2025 set the stage for even bolder moves in the integration space.
With hybrid flexibility, next-gen mapping, cross-cloud API management, and AI in every layer, Azure Integration Services and Azure API Management are ready to power what’s next. Stay tuned and subscribe to the Azure Tech Community blog.

Summing it up: Aggregating repeating nodes in Logic Apps Data Mapper 🧮
Logic Apps Data Mapper makes it easy to define visual, code-free transformations across structured JSON data. One pattern that's both powerful and clean: using built-in collection functions to compute summary values from arrays. This post walks through an end-to-end example: calculating a total from a list of items using just two functions — `Multiply` and `Sum`.

🧾 Scenario: Line Item Totals + Order Summary

You’re working with a list of order items. For each item, you want to:

- Compute Total = Quantity × Price
- Then compute the overall OrderTotal by summing all the individual totals

📥 Input

```json
{
  "orders": [
    { "Quantity": 10, "Price": 100 },
    { "Quantity": 20, "Price": 200 },
    { "Quantity": 30, "Price": 300 }
  ]
}
```

📤 Output

```json
{
  "orders": [
    { "Quantity": 10, "Price": 100, "Total": 1000 },
    { "Quantity": 20, "Price": 200, "Total": 4000 },
    { "Quantity": 30, "Price": 300, "Total": 9000 }
  ],
  "Summary": { "OrderTotal": 14000 }
}
```

🔧 Step-by-step walkthrough

🗂️ 1. Load schemas in Data Mapper

Start in the Azure Data Mapper interface and load:

- Source schema: contains the orders array with Quantity and Price
- Target schema: includes a repeating orders node and a Summary → OrderTotal field

📸 Docked schemas in the mapper

🔁 2. Recognize the repeating node

The orders array shows a 🔁 icon on <ArrayItem>, marking it as a repeating node.

📸 Repeating node detection

💡 When you connect child fields like Quantity or Price, the mapper auto-applies a loop for you. No manual loop configuration needed.

➗ 3. Multiply Quantity × Price (per item)

Drag in a Multiply function and connect:

- Input 1: Quantity
- Input 2: Price

Now connect the output of Multiply directly to the Total node under the orders node in the destination. This runs once per order item and produces the individual totals: [1000, 4000, 9000].

📸 Multiply setup

➕ 4. Aggregate all totals using Sum

Take the same Multiply output and pass it into a Sum function, which combines all the individual totals into one value. Drag and connect:

- Input 1: multiply(Quantity, Price)
- Input 2: <ArrayItem>

Connect the output of Sum to the destination node Summary → OrderTotal: 1000 + 4000 + 9000 = 14000.

📸 Sum function

✅ 5. Test the output

Run a test with your sample input by clicking Open test panel. Copy/paste the sample data and hit Test. The result should match the output shown above.

🧠 Why this pattern works

- 🔁 Repeating to repeating: you’re calculating Total per order
- 🔂 Repeating to non-repeating: you’re aggregating with Sum into a single node
- 🧩 No expressions needed — it’s all declarative

This structure is perfect for invoices, order summaries, or reporting payloads where both detail and summary values are needed. If you prefer to think in code, see the XSLT sketch at the end of this post.

📘 What's coming

We’re working on official docs to cover:

- All functions, including the collection functions (Join, Direct Access, Filter, etc.) that work on repeating nodes
- Behavior of functions inside loops
- Real-world examples like this one

💬 What should we cover next?

We’re always looking to surface patterns that matter most to how you build. If there’s a transformation technique, edge case, or integration scenario you’d like to see explored next, drop a comment below and let us know. We’re listening.
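As a companion to the visual walkthrough, here is a rough, hand-written XSLT 3.0 equivalent of the same map. Data Mapper compiles maps to XSLT behind the scenes, but this sketch is illustrative rather than the mapper's generated output, and it assumes an XML-shaped version of the orders payload (element names Orders/Order/Quantity and Orders/Order/Price are chosen here for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: assumes an XML-shaped source such as
     <Orders><Order><Quantity>10</Quantity><Price>100</Price></Order>...</Orders> -->
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <xsl:template match="/Orders">
    <Orders>
      <!-- Repeating to repeating: emit one Order per source Order, adding a per-item Total -->
      <xsl:for-each select="Order">
        <Order>
          <Quantity><xsl:value-of select="Quantity"/></Quantity>
          <Price><xsl:value-of select="Price"/></Price>
          <Total><xsl:value-of select="Quantity * Price"/></Total>
        </Order>
      </xsl:for-each>
      <!-- Repeating to non-repeating: aggregate every line total into a single node -->
      <Summary>
        <OrderTotal><xsl:value-of select="sum(Order ! (Quantity * Price))"/></OrderTotal>
      </Summary>
    </Orders>
  </xsl:template>
</xsl:stylesheet>
```

The `!` (simple map) operator applies `Quantity * Price` to each Order and feeds the resulting sequence of numbers into `sum()`, mirroring the Multiply-into-Sum wiring in the mapper.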
🧡 Special thanks to Dave Phelps for collaborating on this scenario and helping shape the walkthrough.

Hybrid deployment model for Logic Apps: Performance Analysis and Optimization Recommendations
A few weeks ago, we announced the Public Preview Refresh release of the Logic Apps hybrid deployment model, which allows customers to run Logic Apps workloads on customer-managed infrastructure. This model provides the flexibility to execute workflows either on-premises or in any cloud environment, offering enhanced control over the operation of logic apps. By utilizing customer-managed infrastructure, organizations can adhere to regulatory compliance requirements and optimize performance according to their specific needs.

As customers consider leveraging hybrid environments, understanding the performance of logic apps under various configurations and scenarios becomes critical. This document offers an in-depth performance evaluation of Azure Logic Apps within a hybrid deployment framework. It examines several key factors, such as CPU and memory allocation and scaling mechanisms, providing insights aimed at maximizing the application’s efficiency and performance.

Achieving Optimal Logic Apps Performance in Hybrid Deployments

In this section, we explore the key aspects that affect Logic Apps performance when deployed in a hybrid environment. Factors such as the underlying infrastructure of the Kubernetes environment, the SQL configuration, and the scaling configuration can significantly impact the efficiency of workflows and the overall performance of the applications. The following blog entry provides details of the scaling mechanism of the hybrid deployment model: Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard | Microsoft Community Hub

Configure container resource allocation

When you create a Logic App, a default allocation of 0.5 vCPU and 1 GiB of memory is applied. From the Azure portal, you can modify this allocation on the Container blade; see Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn. Currently, the maximum allocation is 2 vCPU and 4 GiB of memory per app; in the future, higher allocations will be available. For CPU- or memory-intensive processing, such as custom code execution, select higher values for these parameters. In the next section, we compare performance across different CPU and memory allocations. This allocation affects the billing calculation for the Logic App resource; refer to vCPU calculation for more details on the billing impact.

Optimize the node count and size in the Kubernetes cluster

Kubernetes runs application workloads by placing containers into Pods that run on Nodes. A node may be a virtual or physical machine, depending on the cluster. A node pool is a group of nodes that share the same configuration (CPU, memory, networking, OS, maximum number of pods, and so on). You can choose the capacity (cores and memory), minimum node count, and maximum node count for each node pool of the Kubernetes cluster. We recommend allocating higher capacity for CPU- or memory-intensive applications.

Configure scale rule settings

For a Logic App resource, we recommend configuring the maximum and minimum replicas that can be scaled out when a scale event occurs. A higher value for max replicas helps absorb sudden spikes in the number of application requests. The interval at which the scaler checks for scale events and the cooldown period for scale events can also be configured from the Scale blade of the Logic App resource. These parameters shape the scaling pattern.
Optimize the SQL Server configuration

The hybrid deployment model uses Microsoft SQL for runtime storage, so many SQL operations are performed throughout the execution of a workflow, and SQL capacity has a significant impact on the performance of the app. The Microsoft SQL server can be either SQL Server on Windows or an Azure SQL database. A few recommendations on the SQL configuration for better performance:

- If you are using an Azure SQL database, run it on a SQL elastic pool.
- If you are using SQL Server on Windows, run with at least a 4 vCPU configuration.
- Scale out the SQL server once its CPU usage hits 60-70% of the total available CPU.

Performance analysis

For this performance analysis exercise, we used a typical enterprise integration scenario, which includes the components below.

- Data transformation: XSLT transformation, validation, and XML parsing actions
- Data routing: File System connector for storing the transformed content in a file share
- Message queuing: RabbitMQ connector for sending the transformation result to a RabbitMQ queue endpoint
- Control operations: For-each loop for looping through multiple records, condition execution, scope, and error-handling blocks
- Request/response: The XML data is transmitted via an HTTP request, and the status is returned as a response

Summary

For these tests, we used the following environment settings:

- Kubernetes cluster: AKS cluster with Standard D2s v3 (2 vCPU, 8 GiB memory)
- Max replicas: 20
- Cooldown period: 300 seconds
- Polling interval: 30 seconds

With the above environment and settings, we performed multiple application tests with different configurations of SQL server, resource allocation, and test duration, using the Azure Load Testing tool. The following table summarizes the response time, throughput, and total vCPU consumption for each of these configurations. See each scenario for detailed information.

| Scenario | SQL | CPU and memory per Logic App | Test duration | Load | 90th percentile response time | Throughput | Total vCPU consumed |
|---|---|---|---|---|---|---|---|
| Scenario 1 | SQL General Purpose V2 | 1 vCPU / 2 GiB | 10 minutes, 50 users | 503 requests | 68.62 s | 0.84/s | 3.42 |
| Scenario 2 | SQL elastic pool, 4000 DTU | 1 vCPU / 2 GiB | 10 minutes, 50 users | 1004 requests | 40.74 s | 1.65/s | 3 |
| Scenario 3 | SQL elastic pool, 4000 DTU | 2 vCPU / 4 GiB | 10 minutes, 50 users | 997 requests | 40.63 s | 1.66/s | 4 |
| Scenario 4 | SQL elastic pool, 4000 DTU | 2 vCPU / 4 GiB | 30 minutes, 50 users | 3421 requests | 26.67 s | 1.9/s | 18.6 |
| Scenario 5 | SQL elastic pool, 4000 DTU | 0.5 vCPU / 1 GiB | 30 minutes, 50 users | 3055 requests | 31.38 s | 1.7/s | 12.4 |
| Scenario 6 | SQL 2022 Enterprise on Standard D4s v3 VM | 0.5 vCPU / 1 GiB | 30 minutes, 50 users | 4105 requests | 27.15 s | 2.28/s | 10 |

Scenario 1: SQL General Purpose V2, with 1 vCPU and 2 GiB memory - 10-minute test with 50 users

In this scenario, we conducted a 10-minute load test with 50 users, with a Logic App configuration of 1 vCPU and 2 GiB memory and an Azure SQL database running on the General Purpose V2 plan. There were 503 requests with multiple records in each payload; the test achieved a 90th percentile response time of 68.62 seconds and a throughput of 0.84 requests per second.

Scaling: The Kubernetes nodes scaled out to 12 nodes, and the app used a total of 3.42 vCPUs for the test duration.

SQL metrics: The CPU usage of the SQL server reached 90% quite early and stayed above 90% for the remaining duration of the test.
From our backend telemetry, we also observed that the action executions were fast, but there was latency between actions, which indicates a SQL bottleneck.

Scenario 2: SQL elastic pool, with 1 vCPU and 2 GiB memory - 10-minute test with 50 users

In this scenario, we conducted a 10-minute load test with 50 users, with a Logic App configuration of 1 vCPU and 2 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 1004 requests with multiple records in each payload; the test achieved a 90th percentile response time of 40.74 seconds and a throughput of 1.65 requests per second.

Scaling: The Kubernetes nodes scaled out to 15 nodes, and the app used a total of 3 vCPUs for the test duration.

SQL metrics: The SQL server’s CPU utilization peaked at 2% of the elastic pool.

Scenario 3: SQL elastic pool, with 2 vCPU and 4 GiB memory - 10-minute test with 50 users

In this scenario, we conducted a 10-minute load test with 50 users, with a Logic App configuration of 2 vCPU and 4 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 997 requests with multiple records in each payload; the test achieved a 90th percentile response time of 40.63 seconds and a throughput of 1.66 requests per second.

Scaling: The Kubernetes nodes scaled out to 21 nodes, and the app used a total of 4 vCPUs for the test duration.

SQL metrics: The SQL server’s CPU utilization peaked at 5% of the elastic pool.

Scenario 4: SQL elastic pool, with 2 vCPU and 4 GiB memory - 30-minute test with 50 users

In this scenario, we conducted a 30-minute load test with 50 users, with a Logic App configuration of 2 vCPU and 4 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3421 requests with multiple records in each payload; the test achieved a 90th percentile response time of 26.67 seconds and a throughput of 1.90 requests per second.

Scaling: The Kubernetes nodes scaled out to 20 nodes, and the app used a total of 18.6 vCPUs for the test duration.

SQL metrics: The SQL server’s CPU utilization peaked at 4.7% of the elastic pool.

Scenario 5: SQL elastic pool, with 0.5 vCPU and 1 GiB memory - 30-minute test with 50 users

In this scenario, we conducted a 30-minute load test with 50 users, with a Logic App configuration of 0.5 vCPU and 1 GiB memory and an Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3055 requests with multiple records in each payload; the test achieved a 90th percentile response time of 31.38 seconds and a throughput of 1.70 requests per second.

Scaling: The Kubernetes nodes scaled out to 18 nodes, and the app used a total of 12.4 vCPUs for the test duration.

SQL metrics: The SQL server’s CPU utilization peaked at 8.6% of the elastic pool CPU.

Scenario 6: SQL 2022 Enterprise Gen2 on Windows Server 2022 on a Standard D4s v3 image, with 0.5 vCPU and 1 GiB memory - 30-minute test with 50 users

In this scenario, we conducted a 30-minute load test with 50 users, with a Logic App configuration of 0.5 vCPU and 1 GiB memory and a SQL database running on an on-premises SQL 2022 Enterprise Gen2 instance on Windows Server 2022 with a Standard D4s v3 image (4 vCPU and 16 GiB memory). There were 4105 requests with multiple records in each payload; the test achieved a 90th percentile response time of 27.15 seconds and a throughput of 2.28 requests per second.
Scaling: The Kubernetes nodes scaled out to 8 nodes, and the app used a total of 10 vCPUs for the test duration.

SQL metrics: The CPU usage of the SQL server went above 90% after a few minutes, and there was latency on a few runs.

Findings and recommendations

The following are the findings and recommendations from this performance exercise. Consider that this load test was conducted under specific conditions; if you conduct a similar test, the results and findings might vary depending on factors such as workflow complexity, configuration, resource allocation, and network configuration.

- The KEDA scaler performs scale-out and scale-in operations quickly, so the total vCPU usage remains quite low even though the nodes scaled out in the range of 1-20.
- The SQL configuration plays a crucial role in reducing latency between action executions. For a satisfactory load test, we recommend starting with at least a 4 vCPU configuration on the SQL server and scaling out once its CPU usage hits 60-70% of the total available CPU. For critical applications, we recommend a dedicated SQL database for better performance.
- Increasing the dedicated vCPU allocation of the Logic App resource helps with the SAP connector, the Rules Engine, .NET Framework-based custom code operations, and applications with many complex workflows.
- As a general recommendation, regularly monitor performance metrics, adjust configurations to meet evolving requirements, and follow the coding best practices for Logic Apps Standard.

For recommendations to optimize your Azure Logic Apps workloads, consider reviewing the following article: https://techcommunity.microsoft.com/blog/integrationsonazureblog/logic-apps-standard-hosting--performance-tips/3956971

🔁 Public Preview Refresh: More Power to Data Mapper in Azure Logic Apps
We’re back with a Public Preview refresh for the Data Mapper in Azure Logic Apps (Standard), bringing forward some long-standing capabilities that are now fully supported in the new UX.

In our initial announcement, we introduced a redesigned experience focused on usability, error handling, and improved mapping for complex schemas. As we continue evolving the tool, we’re working to bring feature parity with the classic experience, while layering in modern enhancements along the way. With this update, several existing capabilities from the legacy Data Mapper are now available in the new preview version, so you can bring your advanced scenarios forward with confidence.

🛠️ Run XSLT Inside Your Data Map

The ability to apply XSLT has long been a powerful feature in Logic Apps, and we’re excited to bring Run XSLT support into the new UX. You can now invoke reusable transformation logic from your map, including:

- Enterprise-grade XSLT
- Predefined templates or logic from your BizTalk workflows

How to try it out (a sample inline XSLT file is sketched at the end of this post):

1. Create a new data map: right-click on the MapDefinitions or Maps folder and click Create new data map.
2. Store the XSLT file under Artifacts → DataMapper/Extension → InlineXslt.
3. Open the data map and search for Run XSLT in the functions panel.
4. Select the function, then pick the XSLT file you want to run from the dropdown.
5. Connect it to the desired destination node.

In my case, the function simply adds a "Placeholder" value for the Name node at the destination, alongside an "EmployeeType" node. Note that you do not need to connect any source node to the XSLT function, given that this is custom XSLT logic applied directly at the destination node. Upon testing the map, the right value is generated in the destination schema.

🔍 Execute XPath to Extract Targeted Values

Execute XPath is now supported in the new experience, giving you control to extract specific values from nested XML structures. This function is particularly useful for:

- Accessing attributes and nested elements
- Applying logic based on the structure or content of incoming data

How to try it out:

1. Search for Execute XPath in the functions panel.
2. Select the function and add the expression you want to extract.
3. Map it to the destination node.

With the map in place, the test payload correctly creates multiple Address nodes at the destination based on the Address node at the source.

🧩 Use Custom XML Functions

Custom XML functions allow you to define and reuse logic across your map. This helps reduce duplication and supports schema-specific transformations. Now that support is available in the new UX, you can:

- Wrap complex logic into manageable components
- Handle schema-specific edge cases with ease

How to try it out:

1. Add the .xml function file under Artifacts → DataMapper/Extension → Functions.
2. Open the data map, and under the Utility category of functions, select the new function. In our case, the XML function is called Age.
3. Connect the function input to the Date_of_Birth node at the source and the output to the Age node at the destination.
4. Test the map and notice that the age is calculated correctly at the destination node.

🌒 Dark Mode Support in VS Code

The new UX now respects Dark Mode in VS Code, giving you a visually cohesive, low-contrast authoring experience — perfect for long mapping sessions. No extra steps needed: Dark Mode works automatically based on your VS Code theme settings.
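To make the Run XSLT walkthrough above concrete, here is a minimal sketch of what a file dropped under Artifacts → DataMapper/Extension → InlineXslt might look like. The file name (AddEmployeeDefaults.xslt) and the "FullTime" value are hypothetical, chosen to match the Placeholder/EmployeeType example described above; your own file can contain any XSLT logic.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical InlineXslt file: AddEmployeeDefaults.xslt.
     Emits a fixed Name value plus an extra EmployeeType node at the
     destination, matching the example described in this post. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" omit-xml-declaration="yes" indent="yes"/>
  <xsl:template match="/">
    <!-- No source values are read here, which is why the Run XSLT
         function needs no source connection in the map. -->
    <Name>Placeholder</Name>
    <EmployeeType>FullTime</EmployeeType>
  </xsl:template>
</xsl:stylesheet>
```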
⚙️ How to Enable the New Experience

If you haven’t yet tried the new UX:

1. Open your Logic Apps (Standard) project in VS Code.
2. Go to Logic Apps (Standard) extension → Settings → Data Mapper.
3. Select Version ~2.

You’ll find detailed walkthroughs in the initial preview announcement blog.

💬 We’d Love Your Feedback

We’re continuously evolving the Data Mapper, and your feedback is key to getting it right — especially as we bring more advanced transformation scenarios into the new experience.

👉 Submit your feedback here
🐛 Found an issue or have a specific feature request? Let us know on GitHub Issues

Thanks again for being part of the journey — more updates coming soon! 🚀

Announcing: Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy
As AI technologies continue to evolve, they offer businesses a unique opportunity to modernize operations, accelerate innovation, and unlock new growth potential. To stay ahead of the curve, organizations need a comprehensive integration and API strategy that seamlessly connects data, applications, and AI across their entire ecosystem.

We’re excited to announce the "Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy" event. Over two action-packed days, you'll gain valuable insights from Azure leaders, industry analysts, and enterprise customers about how Azure Integration Services and Azure API Management are driving efficiency and agility and fueling business growth in the AI-powered era.

Why Attend?

From security to development, customer success stories to expert analyst insights, this event will highlight why APIs and integration are critical for success now and in the future.

- Get exclusive industry insights: Gain expert perspectives from IDC’s Shari Lava, Azure product leaders, and Forrester consultant Andrew Nadler on the latest trends shaping enterprise integration and API strategies.
- Learn from real-world customer stories: Hear firsthand from organizations like DocuSign, Visa, LyondellBasell, Metcash, Khoj, Brisbane City Council, Moneris, Heineken, Transcard, and CareFirst BlueCross BlueShield on how they are transforming operations with Azure Integration Services and Azure API Management.
- Accelerate your AI and integration strategy: Learn how Azure Logic Apps makes AI-driven automation more accessible than ever, and how Azure API Management empowers businesses to securely scale AI-powered APIs.

Event Highlights

Day 1: Drive Business Growth with a Modern Integration Platform

In today’s competitive landscape, businesses must seamlessly connect data, applications, and AI. On Day 1, you'll explore how Azure Integration Services helps organizations break down data silos, unlock real-time insights, and optimize operations. Learn how connected data streams enable smarter, faster decision-making, while AI-powered workflows reduce complexity and drive operational efficiency. We’ll also explore how businesses are modernizing legacy systems by migrating from BizTalk and other on-premises integration solutions to Azure Integration Services, gaining greater scalability, agility, and business continuity.

Day 2: Power AI and Enterprise Innovation with an API-First Strategy

On Day 2, you'll dive deep into how APIs are the backbone of modern digital ecosystems. APIs enable businesses to scale faster, enhance developer experiences, and create new revenue streams. Learn how Azure API Management helps you secure, manage, and monetize APIs while accelerating AI adoption. You’ll also discover best practices for securing and governing APIs across distributed environments, ensuring that your AI-powered ecosystem remains secure, scalable, and compliant.

Streamed Live Across Multiple Time Zones

Join us no matter where you are! We’re streaming live across multiple time zones, so you can participate at a time that works best for you.

US/Canada: Reserve your seat today!
- Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM PDT
- Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM PDT

Australia/New Zealand: Reserve your seat today!
- Day 1: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM AEDT
- Day 2: Thursday, 1 May 2025 | 9:00 AM – 12:30 PM AEDT

Europe: Reserve your seat today!
- Day 1: Tuesday, 29 April 2025 | 9:00 AM – 12:30 PM BST
- Day 2: Wednesday, 30 April 2025 | 9:00 AM – 12:30 PM BST

Ready to Future-Proof Your Integration and API Strategy?

Don’t miss this exclusive opportunity to learn from industry experts, Azure leaders, and top enterprises. Discover how to future-proof your integration and API strategy to drive AI-powered growth and business success.

Using SMB storage with Hybrid Logic Apps
Logic Apps Standard uses an Azure storage account to store artifact files such as host.json, connections.json, and so on. With a hybrid environment in the picture, access to an Azure storage account cannot always be guaranteed; in fact, we can never assume that internet access will be available, so assuming access to Azure is a long shot. To tackle this problem, hybrid logic apps use the SMB protocol to store artifact files.

What is the SMB protocol?

SMB is a network file-sharing protocol that allows applications on a computer to read and write files and to request services from server programs in a computer network. The SMB protocol can be used on top of its TCP/IP protocol or other network protocols. Using the SMB protocol, an application (or the user of an application) can access files or other resources at a remote server, which allows applications to read, create, and update files on that server. SMB can also communicate with any server program that is set up to receive an SMB client request.

How does SMB come into the picture?

Hybrid Logic Apps run on top of Azure Arc-enabled Kubernetes clusters using connected environments, connecting your on-premises Kubernetes infrastructure to Azure. Connected environments have native support for creating storage over the SMB protocol, which Hybrid Logic Apps uses to connect to a user-provided remote storage location for Logic Apps workflow-related files. Therefore, if you have an SMB server (one popular option is an Azure storage account's file share), you can use that file share to store the workflow-related files.

Quick Start Example

This section demonstrates creating SMB storage and using it when creating a new Hybrid Logic App. For ease of this tutorial, I am using an Azure file share as the SMB server. Any SMB server will do, be it a local folder configured for sharing through Windows' network sharing tab, or, as in this tutorial, an Azure file share.

Step 1: Create a storage account on Azure with file-sharing capabilities. If you are new to Azure, just create a simple storage account without any special options enabled; File Share is available even in the most basic configuration (Standard tier) of an Azure storage account.

Step 2: Navigate to the Azure storage account. You should see the File shares option in the left navigation ribbon under the Data storage section.

Step 3: Use the create file share option to create a new file share. Refer to the picture.

That's pretty much it! The file share should be ready to act as an SMB server.

How to use this information while creating a Hybrid Logic App

When creating a hybrid logic app, the storage section should look something like this. The file share name would be of the format <storageAccountName>.file.core.windows.net, after replacing <storageAccountName> with your storage account name, without the angle brackets. File share path should be the name of the file share you created for SMB storage; in our example it would be sampledemofileshare. Username should again be the storage account name. Password should be the access key of your storage account. Et voila! This should be all you need to do to create a working SMB server using an Azure file share.
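As a worked example with hypothetical names: suppose your storage account is named contosohybrid and the file share you created is sampledemofileshare. The storage section of the Hybrid Logic App would then be filled in as follows:

- File share name: contosohybrid.file.core.windows.net
- File share path: sampledemofileshare
- Username: contosohybrid
- Password: one of the storage account's access keys (found on the storage account's Access keys blade)

The account name contosohybrid is made up for this illustration; substitute your own values.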
Stay tuned: another blog is coming soon on using Linux or Windows as the SMB server and the guidelines associated with them.

Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard
Hybrid Logic Apps offer a unique blend of on-premises and cloud capabilities, making them a versatile solution for various integration scenarios. A key feature of hybrid deployment models is their ability to scale efficiently to manage different workloads. This capability enables customers to optimize their compute costs during peak usage by scaling up to handle temporary spikes in demand and then scaling down to reduce costs when demand decreases. This blog will explore the scaling mechanism in hybrid deployment models, focusing on the role of the KEDA operator and its integration with other components.

Announcing the BizTalk Server 2020 Cumulative Update 6
The BizTalk Server product team has released Cumulative Update 6 (CU6) for BizTalk Server 2020. CU6 contains all released functional and security fixes for customer-reported issues in BizTalk Server 2020. It also adds support for the following new Microsoft platforms:

- Microsoft Windows Server 2022
- Microsoft SQL Server 2022
- Microsoft Windows 11

BizTalk Server 2016 is out of mainstream support, with extended support ending in 2027. If you are running BizTalk Server 2016 or earlier versions of the product, you must upgrade to BizTalk Server 2020 CU6 or strongly consider migrating to Azure Logic Apps. Please fill out this survey: https://aka.ms/biztalklogicapps.

More information about CU6:

This cumulative update includes all the product components; however, only those components that are currently installed on the system are updated. CU6 includes fixes in the following areas:

BizTalk Server Adapters:
- WCF-SAP adapter
- SFTP adapter

BizTalk Server Administration Tools and Management APIs:
- Lost changes to SQL Server Agent jobs

You can obtain the software from the Microsoft Download Center at https://aka.ms/BTS2020CU6. For more information about BizTalk Server 2020 CU6, read the Microsoft Knowledge Base article posted at https://aka.ms/BTS2020CU6KB.