Logic Apps Standard

🤖 AI Procurement assistant using prompt templates in Standard Logic Apps
📘 Introduction Answering procurement-related questions doesn't have to be a manual process. With the new Chat Completions using Prompt Template action in Logic Apps (Standard), you can build an AI-powered assistant that understands context, reads structured data, and responds like a knowledgeable teammate. 🏢 Scenario: AI assistant for IT procurement Imagine an employee wants to know: "When did we last order laptops for new hires in IT?" Instead of forwarding this to the procurement team, a Logic App can: Accept the question Look up catalog details and past orders Pass all the info to a prompt template Generate a polished, AI-powered response 🧠 What Are Prompt Templates? Prompt Templates are reusable text templates that use Jinja2 syntax to dynamically inject data at runtime. In Logic Apps, this means you can: Define a prompt with placeholders like {{ customer.orders }} Automatically populate it with outputs from earlier actions Generate consistent, structured prompts with minimal effort ✨ Benefits of Using Prompt Templates in Logic Apps Consistency: Centralized prompt logic instead of embedding prompt strings in each action. Reusability: Easily apply the same prompt across multiple workflows. Maintainability: Tweak prompt logic in one place without editing the entire flow. Dynamic control: Logic Apps inputs (e.g., values from a form, database, or API) flow right into the template. This allows you to create powerful, adaptable AI-driven flows without duplicating effort — making it perfect for scalable enterprise automation. 💡 Try it Yourself Grab the sample prompt template and sample inputs from our GitHub repo and follow along. 👉 Sample logic app 🧰 Prerequisites To get started, make sure you have: A Logic App (Standard) resource in Azure An Azure OpenAI resource with a deployed GPT model (e.g., GPT-3.5 or GPT-4) 💡 You’ll configure your OpenAI API connection during the workflow setup. 🔧 Build the Logic App workflow Here’s how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs. 📌 Step 0: Start by creating a Stateful Workflow in your Logic App (Standard) resource. Choose "Stateful" when prompted during workflow creation. This allows the run history and variables to be preserved for testing. 📸 Creating a new Stateful Logic App (Standard) workflow Here’s how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs. 📌 Trigger: "When an HTTP request is received" 📌 Step 1: Add three Compose actions to store your test data. documents: This stores your internal product catalog entries [ { "id": "1", "title": "Dell Latitude 5540 Laptop", "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding" }, { "id": "2", "title": "Docking Station", "content": "Dell WD19S docking stations for dual monitor setup" } ] 📸 Compose action for documents input question: This holds the employee’s natural language question. [ { "role": "user", "content": "When did we last order laptops for new hires in IT?" 
} ] 📸 Compose action for question input customer: This includes employee profile and past procurement orders { "firstName": "Alex", "lastName": "Taylor", "department": "IT", "employeeId": "E12345", "orders": [ { "name": "Dell Latitude 5540 Laptop", "description": "Ordered 15 units for Q1 IT onboarding", "date": "2024/02/20" }, { "name": "Docking Station", "description": "Bulk purchase of 20 Dell WD19S docking stations", "date": "2024/01/10" } ] } 📸 Compose action for customer input 📌 Step 2: Add the "Chat Completions using Prompt Template" action 📸 OpenAI connector view 💡Tip: Always prefer the in-app connector (built-in) over the managed version when choosing the Azure OpenAI operation. Built-in connectors allow better control over authentication and reduce latency by running natively inside the Logic App runtime. 📌 Step 3: Connect to Azure OpenAI Navigate to your Azure OpenAI resource and click on Keys and Endpoint for connecting using key-based authentication 📸 Create Azure OpenAI connection 📝 Prompt template: Building the message for chat completions Once you've added the Get chat completions using Prompt Template action, here's how to set it up: 1. Deployment Identifier Enter the name of your deployed Azure OpenAI model here (e.g., gpt-4o). 📌 This should match exactly with what you configured in your Azure OpenAI resource. 2. Prompt Template This is the structured instruction that the model will use. Here’s the full template used in the action — note that the variable names exactly match the Compose action names in your Logic App: documents, question, and customer. system: You are an AI assistant for Contoso's internal procurement team. You help employees get quick answers about previous orders and product catalog details. Be brief, professional, and use markdown formatting when appropriate. Include the employee’s name in your response for a personal touch. # Product Catalog Use this documentation to guide your response. Include specific item names and any relevant descriptions. {% for item in documents %} Catalog Item ID: {{item.id}} Name: {{item.title}} Description: {{item.content}} {% endfor %} # Order History Here is the employee's procurement history to use as context when answering their question. {% for item in customer.orders %} Order Item: {{item.name}} Details: {{item.description}} — Ordered on {{item.date}} {% endfor %} # Employee Info Name: {{customer.firstName}} {{customer.lastName}} Department: {{customer.department}} Employee ID: {{customer.employeeId}} # Question The employee has asked the following: {% for item in question %} {{item.role}}: {{item.content}} {% endfor %} Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs. 📸 Prompt template action view 3. Add your prompt template variables Scroll down to Advanced parameters → switch the dropdown to Prompt Template Variable. 
Then add a new item for each Compose action and reference it dynamically from previous outputs: documents, question, and customer. 📸 Prompt template variable references

🔍 How the template works

| Template element | What it does |
| --- | --- |
| {{ customer.firstName }} {{ customer.lastName }} | Displays the employee name |
| {{ customer.department }} | Adds department context |
| {{ question[0].content }} | Injects the user’s question from the Compose action named question |
| {% for doc in documents %} | Loops through catalog data from the Compose action named documents |
| {% for order in customer.orders %} | Loops through the employee’s order history from customer |

Each of these values is dynamically pulled from your Logic App Compose actions — no code, no external services needed. You can apply the exact same approach to reference data from any connector, like a SharePoint list, SQL row, email body, or even AI Search results. Just map those outputs into the Prompt Template and let Logic Apps do the rest.

✅ Final Output When you run the flow, the model might respond with something like: "The last order for Dell Latitude 5540 laptops was placed on February 20, 2024 — 15 units were procured for IT new hire onboarding." This is based entirely on the structured context passed in through your Logic App — no extra fine-tuning required. 📸 Output from run history

💬 Feedback Let us know what other kinds of demos and content you would like to see using this form.
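To see how the Jinja2 placeholders resolve outside of Logic Apps, here is a minimal Python sketch that renders a trimmed-down version of the prompt template with the same sample documents, question, and customer data. It assumes the jinja2 package is installed; the template text is abbreviated for readability and is not the exact action configuration.

```python
from jinja2 import Template

# Abbreviated version of the prompt template used in the walkthrough.
PROMPT = Template(
    "# Product Catalog\n"
    "{% for item in documents %}- {{ item.title }}: {{ item.content }}\n{% endfor %}"
    "# Order History\n"
    "{% for item in customer.orders %}- {{ item.name }}: {{ item.description }} — Ordered on {{ item.date }}\n{% endfor %}"
    "# Question\n"
    "{% for item in question %}{{ item.role }}: {{ item.content }}\n{% endfor %}"
)

documents = [
    {"id": "1", "title": "Dell Latitude 5540 Laptop",
     "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"},
    {"id": "2", "title": "Docking Station",
     "content": "Dell WD19S docking stations for dual monitor setup"},
]
question = [{"role": "user", "content": "When did we last order laptops for new hires in IT?"}]
customer = {
    "firstName": "Alex", "lastName": "Taylor", "department": "IT", "employeeId": "E12345",
    "orders": [
        {"name": "Dell Latitude 5540 Laptop",
         "description": "Ordered 15 units for Q1 IT onboarding", "date": "2024/02/20"},
    ],
}

# Render the template the same way the Prompt Template action injects its variables.
print(PROMPT.render(documents=documents, question=question, customer=customer))
```

The Chat Completions using Prompt Template action performs this same substitution at runtime before the message is sent to your deployed model.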
Concurrency support for Service Bus built-in connector in Logic Apps Standard

In this post, we'll cover the recent enhancements in the built-in (InApp) Service Bus connector in Logic Apps Standard. Specifically, we'll cover the support for concurrency for the Service Bus trigger...

Demystifying Logic App Standard workflow deployments
Automated Deployments There is currently not much quality information on how to deploy workflows to Logic Apps Standard. Logic Apps Standard is built on the Azure Functions runtime and unlike Logic Apps Consumption, there can be any number of workflows for one Logic App Standard. A comparison of these two versions is here. Like Azure Functions, the best deployment approach for these is to: Deploy the Logic App Standard infrastructure Deploy each workflow separately from the infrastructure as the rate of change of these is much higher than the infrastructure. For a Logic App that has multiple workflows, you can make the decision to deploy all of the workflows in one go or each workflow separately. This is very much akin to Azure Functions an aligns with a (micro) services approach as opposed to a monolith deployment approach. Steps to Deployment In general, deployment is done in two halves: Infrastructure Workflows Apart from the processes being different, the main reason for this separation is that usually workflow deployments - like any code deployment tend to happen at a higher frequency than infrastructure deployments. Infrastructure Deployment There are many routes to this: CLI Developer CLI (azd) ARM template Bicep Terraform. It is best to align this with the overall approach that is other use on adjacent projects or organisational standards for most skills reuse. This is how a Logic Apps Standard may be deployed using Bicep There are broadly-speaking 4 main components: the Logic App itself the server farm on which the Logic App is deployed (this may be shared) A storage account for state management Optionally Application Insights Some sample bicep the server farm example resource serverfarms_ASP_towerhamletsrg_97bc_name_resource 'Microsoft.Web/serverfarms@2024-04-01' = { name: serverfarms_ASP_towerhamletsrg_97bc_name location: 'UK South' sku: { name: 'WS1' tier: 'WorkflowStandard' size: 'WS1' family: 'WS' capacity: 1 } kind: 'elastic' properties: { perSiteScaling: false elasticScaleEnabled: true maximumElasticWorkerCount: 20 isSpot: false reserved: false isXenon: false hyperV: false targetWorkerCount: 0 targetWorkerSizeId: 0 zoneRedundant: false } } the Logic App resource sites_jjblankthree_name_resource 'Microsoft.Web/sites@2024-04-01' = { name: sites_jjblankthree_name location: 'UK South' kind: 'functionapp,workflowapp' identity: { type: 'SystemAssigned' } properties: { enabled: true hostNameSslStates: [ { name: '${sites_jjblankthree_name}.azurewebsites.net' sslState: 'Disabled' hostType: 'Standard' } { name: '${sites_jjblankthree_name}.scm.azurewebsites.net' sslState: 'Disabled' hostType: 'Repository' } ] serverFarmId: serverfarms_ASP_towerhamletsrg_97bc_externalid reserved: false isXenon: false hyperV: false dnsConfiguration: {} vnetRouteAllEnabled: false vnetImagePullEnabled: false vnetContentShareEnabled: false siteConfig: { numberOfWorkers: 1 acrUseManagedIdentityCreds: false alwaysOn: false http20Enabled: false functionAppScaleLimit: 0 minimumElasticInstanceCount: 1 } scmSiteAlsoStopped: false clientAffinityEnabled: false clientCertEnabled: false clientCertMode: 'Required' hostNamesDisabled: false ipMode: 'IPv4' vnetBackupRestoreEnabled: false customDomainVerificationId: '7C6761218AA3FF18AC28235A2C8C88C2CB16915F3086640269C352A8D052E5CF' containerSize: 1536 dailyMemoryTimeQuota: 0 httpsOnly: true endToEndEncryptionEnabled: false redundancyMode: 'None' publicNetworkAccess: 'Enabled' storageAccountRequired: false keyVaultReferenceIdentity: 'SystemAssigned' } } storage for state 
management: resource logicAppStorage 'Microsoft.Storage/storageAccounts@2023-01-01' = { name: logicAppStorageName location: location kind: 'StorageV2' sku: { name: storageAccountSku } properties: { allowBlobPublicAccess: false accessTier: 'Hot' supportsHttpsTrafficOnly: true minimumTlsVersion: 'TLS1_2' } } Application Insights resource applicationInsightsLogicApp 'Microsoft.Insights/components@2020-02-02' = { name: 'appinss-${key}-${env}' location: location kind: 'web' properties: { Application_Type: 'web' Flow_Type: 'Bluefield' publicNetworkAccessForIngestion: 'Enabled' publicNetworkAccessForQuery: 'Enabled' Request_Source: 'rest' RetentionInDays: 30 WorkspaceResourceId: logAnalyticsWorkspacelogicApp.id } } Workflow Deployment As the Logic Apps run under the Azure Functions runtime, the Azure Functions deployment mechanisms may be used. These mechanisms are traditionally used for code deployments, but can also deploy the workflow definitions. What needs to be deployed There are three or four components to a Logic App definition: hosts.json connections.json workflow.json parameters.json (this is optional) The hosts file is pretty standard and just defines the runtime and versions that the logic apps run under. The connections.json contains references to the connections to other services that any of the workflows may need. These work alongside some App Settings which will actually define connection strings to these remote services. For a deployed workflow to work, all of the connections used need to be defined and these connections span all of the workflows for that Logic App. The parameters.json file is a list of parameters that may be referenced in the workflow definition. If there are parameters in any of the workflows, then this file needs to be deployed. Each workflow itself needs to be defined in its own workflow.json file. Therefore if there are multiple workflows to be deployed, then each of these needs to be in its own folder and each named workflow.json. How to deploy There is already a GitHub Action zip deploy for Azure Functions. This can be used for Logic Apps Standard workflow deployments. In order to use this, a zip file needs to be created that conforms to a standard structure: Steps in GitHub Action Firstly use an action to copy the files to a known place output for the zip deploy step: - name: Create project folder run: | mkdir output cp 'logicapps/host.json' 'output/' cp 'logicapps/connections.json' 'output/' cp -r 'workflows/copy-blob' 'output/' cp -r 'workflows/move-blobs' 'output/' Then the action to zip the above structure needs to be performed: - name: Easy Zip Files uses: papeloto/action-zip@v1 with: dest: '${{ github.run_id }}.zip' files: output/ this zip file is then used to deploy to the logic app - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1 id: fa with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: '${{ github.run_id }}.zip' App Settings may be created in a couple of ways: - name: Create app settings uses: azure/appservice-settings@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} app-settings-json: '${{ secrets.APP_SETTINGS }}' In the above, all of the App Settings that relate to the connections as a piece of JSON are put into a GitHub secret called APP_SETTINGS. These are then deployed in one go. 
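The package layout described under "How to deploy" above can also be assembled locally when you want to inspect the zip before wiring it into a pipeline. The following is a minimal Python sketch assuming the same folder names as the example (logicapps/host.json, logicapps/connections.json, and the copy-blob and move-blobs workflow folders); adjust the paths to your repository.

```python
import shutil
import zipfile
from pathlib import Path

# Paths follow the example above; adjust to your repository layout.
repo = Path(".")
output = repo / "output"
workflows = ["copy-blob", "move-blobs"]  # one folder per workflow, each containing workflow.json

# Root-level files required by the Logic Apps Standard runtime.
output.mkdir(exist_ok=True)
shutil.copy(repo / "logicapps" / "host.json", output / "host.json")
shutil.copy(repo / "logicapps" / "connections.json", output / "connections.json")

# Each workflow goes into its own folder named after the workflow.
for name in workflows:
    shutil.copytree(repo / "workflows" / name, output / name, dirs_exist_ok=True)

# Zip the package with paths relative to the output folder, ready for zip deploy.
with zipfile.ZipFile("package.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in output.rglob("*"):
        zf.write(path, path.relative_to(output))
```

The resulting package.zip is the same artifact the "Easy Zip Files" step produces and can be handed to the Azure Functions deploy action shown above.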
An alternative approach is to put each individual secret as its own secret in GitHub and pass these into the app service like below: - name: Set Function App secret run: | az functionapp config appsettings set \ --name ${{ env.AZURE_FUNCTIONAPP_NAME }} \ --resource-group ${{ env.AZURE_RESOURCER_GROUP_NAME }} \ --settings <SettingName>=${{ secrets.<SecretName> }} In the above a specific setting is set from a specific secret. A combination of the two may be done, which is to set most of the settings that do not contain secrets from a configuration file (settings.json) and then separately set the secrets using the above method. - name: Set Function App settings from file run: | az functionapp config appsettings set \ --name ${{ env.AZURE_FUNCTIONAPP_NAME }} \ --resource-group ${{ env.AZURE_RESOURCER_GROUP_NAME }} \ --settings .json Deployed Logic App Workflows Once this has been run, the Logic App should have these workflows present. These reference connections: which references an App Setting Understanding the relationship between workflows, connections and App Settings In a Logic App Standard, connections are common across all workflows. Each workflow then references connections as needed. Here is the JSON definition of a connection { "serviceProviderConnections": { "AzureBlob": { "parameterValues": { "connectionString": "@appsetting('AzureBlob_connectionString')" }, "parameterSetName": "connectionString", "serviceProvider": { "id": "/serviceProviders/AzureBlob" }, "displayName": "source-container" } }, "managedApiConnections": {} } As can be seen from the above, the connection string is not kept in the connections.json, but references an App Setting and it is this App Setting that contains the connection string. So, in deployment terms any secrets will only be in App Settings. An extended approach would be to use App Service Key Vault references .KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret) In this manner, secrets will be in key vault and the Logic App Standard runtime will fetch these values from key vault. It is important to remember that for this to work, the Logic App must have a managed identity and a role assignment that allows the Logic App to read key vault secrets. Debugging Deployments There are quite a few ways in which this can go wrong. Here are some hints: check the Action run and that all steps are executing and the run is green. Check the logic app to see if the workflows and connections are present Check that there is an App Setting per connection - as it is this that holds the connection information Check Kudu to see if you see the file structure you are expecting 5. If secrets are used and these reference key vault, then the portal experience will show if the Logic App can retrieve the named secret. A green tick will show a valid reference, otherwise it will be coloured red and you may need to follow troubleshooting key vault references Sample Code There is a GitHub repository that explains this in detail with a worked example of a Logic App Standard and its deployment using a GitHub Action. Summary and Take-Aways Logic App Standard allows separation of infrastructure from workflow deployments. Automated workflow deployment uses the Azure Functions "zip deploy" GitHub Action to deploy workflows. The zip file that needs to be created as part of the build process has a very specific structure that needs to be followed for the workflows to be deployed correctly. 
Configuration settings and secrets end up in the App Settings section of the Environment Variables menu in the portal experience of Logic Apps Standard. These settings can be deployed all at once as one large JSON secret or individually. These secrets can even be Key Vault references to improve security - with some extra configuration.

Summing it up: Aggregating repeating nodes in Logic Apps Data Mapper 🧮
Logic Apps Data Mapper makes it easy to define visual, code-free transformations across structured JSON data. One pattern that's both powerful and clean: using built-in collection functions to compute summary values from arrays. This post walks through an end-to-end example: calculating a total from a list of items using just two functions — `Multiply` and `Sum`.

🧾 Scenario: Line Item Totals + Order Summary
You’re working with a list of order items. For each item, you want to:
- Compute Total = Quantity × Price
- Then, compute the overall OrderTotal by summing all the individual totals

📥 Input
{ "orders" : [ { "Quantity" : 10, "Price" : 100 }, { "Quantity" : 20, "Price" : 200 }, { "Quantity" : 30, "Price" : 300 } ] }

📤 Output
{ "orders" : [ { "Quantity" : 10, "Price" : 100, "Total" : 1000 }, { "Quantity" : 20, "Price" : 200, "Total" : 4000 }, { "Quantity" : 30, "Price" : 300, "Total" : 9000 } ], "Summary": { "OrderTotal": 14000 } }

🔧 Step-by-step walkthrough

🗂️ 1. Load schemas in Data Mapper
Start in the Azure Data Mapper interface and load:
- Source schema: contains the orders array with Quantity and Price
- Target schema: includes a repeating orders node and a Summary → OrderTotal field
📸 Docked schemas in the mapper

🔁 2. Recognize the repeating node
The orders array shows a 🔁 icon on <ArrayItem>, marking it as a repeating node. 📸 Repeating node detection
💡 When you connect child fields like Quantity or Price, the mapper auto-applies a loop for you. No manual loop configuration needed.

➗ 3. Multiply Quantity × Price (per item)
Drag in a Multiply function and connect:
- Input 1: Quantity
- Input 2: Price
Now connect the output of Multiply directly to the Total node under the orders node in the destination. This runs once per order item and produces the individual totals: [1000, 4000, 9000] 📸 Multiply setup

➕ 4. Aggregate All Totals Using Sum
Use the same Multiply function output and pass it into a Sum function. This will combine all the individual totals into one value. Drag and connect:
- Input 1: multiply(Quantity, Price)
- Input 2: <ArrayItem>
Connect the output of Sum to the destination node Summary → OrderTotal: 1000 + 4000 + 9000 = 14000 📸 Sum function

✅ 5. Test the Output
Run a test with your sample input by clicking on the Open test panel. Copy/paste the sample data and hit Test. The result should look like this:
{ "orders": [ { "Quantity": 10, "Price": 100, "Total": 1000 }, { "Quantity": 20, "Price": 200, "Total": 4000 }, { "Quantity": 30, "Price": 300, "Total": 9000 } ], "Summary": { "OrderTotal": 14000 } }

🧠 Why this pattern works
🔁 Repeating to repeating: You’re calculating Total per order
🔂 Repeating to non-repeating: You’re aggregating with Sum into a single node
🧩 No expressions needed — it’s all declarative
This structure is perfect for invoices, order summaries, or reporting payloads where both detail and summary values are needed.

📘 What's coming
We’re working on official docs to cover:
- All functions, including the collection functions (Join, Direct Access, Filter, etc.) that work on repeating nodes
- Behavior of functions inside loops
- Real-world examples like this one

💬 What should we cover next? We’re always looking to surface patterns that matter most to how you build. If there’s a transformation technique, edge case, or integration scenario you’d like to see explored next — drop a comment below and let us know. We’re listening. 🧡 Special thanks to Dave Phelps for collaborating on this scenario and helping shape the walkthrough.
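For readers who want to sanity-check the mapping logic outside of the Data Mapper, here is a minimal Python sketch of the same two-step pattern: a per-item multiply followed by a sum over the repeating node. It operates on the sample input above and illustrates the transformation itself, not how the Data Mapper executes it internally.

```python
input_payload = {
    "orders": [
        {"Quantity": 10, "Price": 100},
        {"Quantity": 20, "Price": 200},
        {"Quantity": 30, "Price": 300},
    ]
}

# Step 1: repeating to repeating - compute Total = Quantity x Price per order item.
orders_with_totals = [
    {**item, "Total": item["Quantity"] * item["Price"]} for item in input_payload["orders"]
]

# Step 2: repeating to non-repeating - aggregate the individual totals into OrderTotal.
output_payload = {
    "orders": orders_with_totals,
    "Summary": {"OrderTotal": sum(item["Total"] for item in orders_with_totals)},
}

print(output_payload)  # Summary.OrderTotal: 14000
```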
How to get JWT token of certificate-based SPN in logic app HTTP action

Step 1: As per the Logic Apps documentation, we need to add "WEBSITE_LOAD_USER_PROFILE = 1" in the Logic App AppSettings. Reason: we enable this setting so that the runtime can load the certificates, which are stored in the user profile. Reference: https://learn.microsoft.com/en-us/azure/connectors/connectors-native-http?tabs=standard#client-certificate-or-microsoft-entra-id-oauth-with-certificate-credential-type-authentication

Step 2: Run the following commands in sequence (you need to have OpenSSL installed and run them from the "Win64 OpenSSL Command Prompt"):
openssl genrsa -out private-key.pem 3072 // generate a 3072-bit RSA private key and save it to a file named "private-key.pem".
openssl rsa -in private-key.pem -pubout -out public-key.pem // read the RSA private key from "private-key.pem", extract the corresponding public key, and save it to a file named "public-key.pem".
openssl req -new -x509 -key private-key.pem -out cert.pem -days 360 // generate a self-signed X.509 certificate using the private key stored in "private-key.pem". The certificate will be valid for 360 days and saved to a file named "cert.pem", which needs to be uploaded to the SPN.
openssl pkcs12 -export -inkey private-key.pem -in cert.pem -out key.pfx // create a PKCS#12 file named "key.pfx", which contains both the private key from "private-key.pem" and the certificate from "cert.pem".
Why is a password needed for the pkcs12 export? During this step, you will be prompted to set an export password to protect the PKCS#12 file. This password will be needed when importing the PKCS#12 file into other systems or software. Without a password, you may hit a "cannot load private certificate" exception. Demo:

Step 3: Open PowerShell and run the following commands: $pfx_cert = [System.IO.File]::ReadAllBytes('key.pfx') [System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt' After the execution, you should see a new file named "pfx-encoded-bytes.txt". Reference: https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-securing-a-logic-app?tabs=azure-portal Demo:

Step 4: Upload the certificate into an AAD app registration.

Step 5: Copy the text file content ("pfx-encoded-bytes.txt") as the HTTP action certificate and fill in the password. We can test with any endpoint (e.g., RequestBin) and should see the request come in with a bearer token. Advantage: it combines Get token + Invoke endpoint in a single action. Demo: Create an HTTP action. Test. Thank you.

Upgrading Logic Apps standard to .NET 8
Logic Apps standard runtime is currently running on .NET6, which will reach end of support by Nov 12, 2024. We are working on migrating Logic Apps standard to run on .NET 8 in the coming weeks. We will continue to support Logic Apps running on top of .NET 6 until this upgrade is completed. As part of this change, we will transparently upgrade all Logic Apps to run on .NET 8, except the ones in the following categories: Logic Apps that use NuGet based deployment model. Logic Apps that have been pinned to a particular bundle version – we do not recommend customers to do this, if you have done this, then your apps will not be migrated to .NET8 Logic Apps that are still running on top of Functions V3 – these are no longer supported, and we have stopped updating Logic Apps runtime and they will be not migrated. If your applications don't fit in any of those categories, your application will be upgraded transparently and no action is required Getting ready for this update If you have an application in one of exception categories, you need to update your application and deployment processes to include the following app setting: FUNCTIONS_INPROC_NET8_ENABLED 0 NOTE: this app setting is needed for preventing an app being automatically migrated to run on .NET 8. We will update the app setting for apps that are falling into the above categories and will notify the customers to update their deployment pipelines to prevent these values being overwritten. We will start rolling out a change that will automatically move any app that doesn’t have the above app setting to run on .NET8 the next time the app restarts. As part of the rolling out process, we will add this flag to any application that fits the exception criteria. But this will be done only once, so subsequent changes in configuration could override our setting. This is why you will need to update your processes to ensure this until the app is successfully upgrade. Manual steps needed for Apps that are not automatically migrated If your app is on the latest version and using bundle mode, then there is no action needed. However, if your app is in one of the above categories, then you need to take action to prevent it from running on .NET8 with an unsupported Logic Apps runtime version. For Customers using NuGet model control the Logic Apps runtime version are required to update the Logic Apps runtime NuGet package to a version that supports .NET8 (1.94.56 or higher). Once these apps are built using the latest NuGet package, remove the following app setting from your app and your app will automatically be moved to on top of .NET8: Building and Running your Nuget based app locally with .NET8 Download the latest Function Core Tools Ensure that the following project references version are updated: Microsoft.NET.Sdk.Functions set to "4.4.0" or later Microsoft.Azure.Workflows.WebJobs.Extension set to "1.94.56" or later Update local.settings.json to include both values below: FUNCTIONS_WORKER_RUNTIME set to "dotnet" FUNCTIONS_INPROC_NET8_ENABLED set to "1". NOTE: Once you have build and tested your application, flows to your deployed app. 
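For local development, the two local.settings.json values listed above can be added with a small script rather than by hand. This is a minimal sketch assuming the standard local.settings.json layout with a top-level "Values" object; adjust the path to your Logic Apps project.

```python
import json
from pathlib import Path

settings_path = Path("local.settings.json")  # path within your Logic Apps project
settings = json.loads(settings_path.read_text())

# Values required to build and run a NuGet-based project locally on .NET 8.
settings.setdefault("Values", {})
settings["Values"]["FUNCTIONS_WORKER_RUNTIME"] = "dotnet"
settings["Values"]["FUNCTIONS_INPROC_NET8_ENABLED"] = "1"

settings_path.write_text(json.dumps(settings, indent=2))
print("Updated", settings_path)
```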
Updated Pinned versions for Deployed apps If have pinned to a particular bundle version or Function Host version– you are required to update to the latest bundle version and the latest Function Host version AzureFunctionsJobHost__extensionBundle__id Microsoft.Azure.Functions.ExtensionBundle.Workflows AzureFunctionsJobHost__extensionBundle__version [1.*, 2.0.0) In addition to updating the above two settings, you also need to remove the folowing app setting so that your app is moved to run on .NET8. FUNCTIONS_INPROC_NET8_ENABLED Reset the following app settings to v4 if are pinning to a particular Function Host version FUNCTIONS_EXTENSION_VERSION ~v4 NOTE: For customers who are still on Functions V3 – these apps needs to be moved to Functions V4 before they can run on .NET8. Please refer to the instructions here. Frequently Asked Questions Q: How do I prevent my app from being migrated to .NET8? FUNCTIONS_INPROC_NET8_ENABLED setting is used in determining the runtime version. By default, none of the apps will have this setting and all these apps that don’t have this setting will all be moved to run on .NET8. Users can set the value to 0 to prevent it from moving into .NET8. These apps will continue to run on .NET6. Q: What will happen if my apps is using NuGet based deployment, and I update the app setting FUNCTIONS_INPROC_NET8_ENABLED to 0 to prevent the app from being moved to .NET8? We will update your app setting for you before we move your app to .NET8. However, if you do a deployment and override this setting, then there is a likelihood of your app being broken if the Logic Apps runtime version is not supported on .NET8 Q: Why do the apps in the above categories require manual actions? Not all Logic Apps runtime versions support running on .NET8. Only versions that contains required compatibility changes will support running on .NET8. For apps in the above categories, the users are responsible for updating the Logic Apps runtime version used by the Logic App and hence the need for the manual actions. Q: .NET6 is scheduled to reach end of life by Nov 12, 2024. Will the Logic Apps running on .NET6 be supported beyond this day? Yes, we will continue to support Logic Apps on .NET6 until we complete the migration to .NET8. Q: If my app is in the above category and I’m responsible for moving the app to run on .NET8, is there due date before I’m supposed to complete the move to .NET8? We are planning to complete the migration by end of March 31, 2025 and we expect the customers have their apps moved to .NET8 before that date if the manual actions are needed.782Views0likes2CommentsLogic Apps Aviators Newsletter - April 2025
In this issue: Ace Aviator of the Month News from our product group Community Playbook News from our community Ace Aviator of the Month April’s Ace Aviator: Massimo Crippa What's your role and title? What are your responsibilities? I am a Lead Architect at Codit, providing technical leadership and translating strategic technology objectives into actionable plans. My role focuses on ensuring alignment across teams while enforcing technical methodologies and standards. In addition to supporting pre-sales efforts, I remain hands-on by providing strategic customers with expertise in architecture, governance, and cloud best practices. Can you give us some insights into your day-to-day activities and what a typical day in your role looks like? My day-to-day activities are a mix of customer-facing work and internal optimization: A day could start with a sync with the quality team to assess our technical performance, evaluate the effectiveness of the provided framework, and identify areas for improvement. It might continue with discussions with our Cloud Solution Architects about customer scenarios, challenging architectural decisions to ensure robustness and alignment with best practices. Additionally, I meet with customers to understand their needs, provide guidance, and contribute to development work—I like to stay hands-on. Finally, I keep an eye on technology advancements, understanding how they fit into the company’s big picture, tracking key learnings, and ensuring proper adoption while preventing misuse. What motivates and inspires you to be an active member of the Aviators/Microsoft community? I'm passionate about cloud technology, how it fuels innovation and reshapes the way we do business.Yes, I love technology, but people are at the heart of everything. I am inspired by the incredible individuals I have met and those I will meet on this journey. The opportunity to contribute, learn, and collaborate with industry experts is truly inspiring and keeps me motivated to stay engaged in the community. Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology? I started my career in technology at a time when knowledge was far less accessible than it is today. There were fewer online resources, tutorials, and training materials, making self-learning more challenging. Because of this, my best advice is to develop a strong reading habit—books have always been, and still are, one of the best ways to truly dive deep into technology. Even today, while online courses and documentation provide great starting points, technical books remain unmatched when it comes to exploring concepts in depth and understanding the "why" behind the technology. What has helped you grow professionally? One key factor is the variety of projects and customers I have worked with, which has exposed me to different challenges and perspectives. However, the most impactful factor has been the mentors I have met throughout my professional journey. Having great leaders who lead by example is essential. Learning from them, applying those lessons in practice, and going the extra mile in your own way makes all the difference. If you had a magic wand that could create a feature in Logic Apps, what would it be and why? A fully containerized Logic Apps experience, encompassing the workflow engine, connectors, and monitoring. 
I envision a LEGO-like modular approach, where I can pick ready-to-use container images and run only what I need, whether in the cloud or at the edge, with minimal overhead. This composable integration model would provide maximum control, flexibility, scalability, and portability for enterprise integrations. News from our product group Logic Apps Live Mar 2025 Missed Logic Apps Live in March? You can watch it here. We had a very special session, directly from Microsoft Campus in Redmon, with four of our MVPs, which were attending the MVP Summit 2025. Together, we’ve discussed the impact of community on our careers and how the AI wave impacts the integration space. Lot’s of great insights from Cameron Mckay, Mick Badran, Mattias Lögdberg and Sebastian Meyer! Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy We’re excited to announce the "Unleash AI Innovation with a Modern Integration Platform and an API-First Strategy" event. Over two action-packed days, you'll gain valuable insights from Azure leaders, industry analysts, and enterprise customers about how Azure Integration Services and Azure API Management are driving efficiency, agility, and fueling business growth in the AI-powered era. Aggregating repeating nodes in Logic Apps Data Mapper This post walks through an end-to-end example of a powerful transformation pattern in Logic Apps Data Maps: using built-in collection functions to compute summary values from arrays, taking advantage of the Sum and Multiply functions. Public Preview Refresh: More Power to Data Mapper in Azure Logic Apps We’re back with a Public Preview refresh for the Data Mapper in Azure Logic Apps (Standard) — bringing forward some long-standing capabilities that are now fully supported in the new UX. With this update, several existing capabilities from the legacy Data Mapper are now available in the new preview version — so you can bring your advanced scenarios forward with confidence. Reliable B2B Tracking using Premium SKU Integration Account We are introducing Reliable B2B Tracking in Logic Apps Standard using a Premium SKU Integration Account. This new feature ensures that all B2B transactions are reliably tracked and ingested into an Azure Data Explorer (ADX) cluster, providing a lossless tracking mechanism with powerful querying and visualization capabilities. Azure Logic Apps Hybrid Deployment Model - Public Preview Refresh We are thrilled to announce the latest public refresh of the Azure Logic Apps Hybrid Deployment Model, introducing .NET Framework custom code support on Linux containers, support for running the SAP built-in connector on Linux containers, and a Rabbit MQ built-in connector for on-premises queue scenarios. Using SMB storage with Hybrid Logic Apps Logic apps standard uses azure storage account to store artifact files such as host.json, connection.js etc,. With Hybrid environment in picture, access of azure storage account always cannot be guarenteed. And in any scenario, we can never assume that access internet will be available so assuming access to azure will be a long shot. To tackle this problem, in hybrid logic apps, we are using the SMB protocol to store artifact files. Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard Hybrid Logic Apps offer a unique blend of on-premises and cloud capabilities, making them a versatile solution for various integration scenarios. 
This blog will explore the scaling mechanism in hybrid deployment models, focusing on the role of the KEDA operator and its integration with other components. Sending Messages to Confluent Cloud topic using Logic App In this blog post, we will explore how to use Azure Logic Apps to send messages to Kafka Confluent topic. Although currently there is no out-of-box Kafka Confluent connector in logic app, we found that Kafka Confluent provides REST API Confluent Cloud API Reference Documentation, allowing us to use the HTTP action in workflow to call the Kafka Confluent API which produce record to topic. Access [Logic Apps / App Services] Site Files with FTPS using Logic Apps You may need to access storage files for your site, whether it is a Logic App Standard, Function App, or App Service. Depending on your ASP SKU, these files can be accessed using FTP/FTPS. Some customers encounter difficulties when attempting to connect using Implicit/Explicit FTPS. This post aims to simplify this process by utilizing a Logic App to list files, retrieve file content, and update files. Boosting The Developer Experience with Azure API Management: VS Code Extension v1.1.0 Introducing the new version of the Azure API Management VS Code extension. This update brings several exciting enhancements, including tighter integration with GitHub Copilot to assist in explaining and drafting policies, as well as improved IntelliSense functionality. Deploy Logic App Standard with Application Routing Feature Based on Terraform and Azure Pipeline This article shared a mature plan to deploy logic app standard then set the application routing features automatically. It's based on Terraform template and Azure DevOps Pipeline. Logic Apps Aviators Community Playbook We are excited to announce the latest articles from the Logic Apps Aviators Community Playbook. Interested in contributing? We have made it easy for you to get involved. Simply fill out our call for content sign-up link with the required details and wait for our team to review your proposal. And we will contact you with more details on how to contribute. Streamline deployment for Azure Integration Services with Azure Verified Modules for Bicep Author: Viktor Hogberg Azure Verified Modules is a single Microsoft standard that gives you a unified experience to streamline deployment for Bicep modules and publish them in the public Azure Bicep Registry in GitHub. These modules speed up your experience when working with deployments - no more guessing, copying and pasting, or unclear dependencies between resources. Learn how to use those modules to deploy Azure Integration Services efficiently! News from our community Properly securing Logic App Standard with Easy Auth Post by Calle Andersson In this post, Calle shows you how to configure authentication with only the minimal settings required to lock things down. This makes automation possible since it doesn't require high privileged permissions to be given to a pipeline. Integration Love Story with Massimo Crippa Video by Ahmed Bayoumy In the latest episode of Integration Love Story, we meet Massimo Crippa – an Italian living in France, working in Belgium, and deeply involved in Microsoft's integration platforms. With a background ranging from COM+ and SSIS to today's API Management and Logic Apps Hybrid, Massimo shares insights from a long and educational career. 
Logic App Standard workflow default page can be changed Post by Sandro Pereira In Azure Logic Apps (Standard), you can change the default page that loads when you open a workflow in the Azure Portal! Azure has introduced a Set as default page option in the Azure Portal for Logic App Standard workflows. This allows you to customize which tab (like Run History or Designer) opens by default when you enter a workflow. Enhancing Operational Visibility: Leveraging Azure Workbooks series Post by Simon Clendon This is blog post series that describes how to use the power of Log Analytics Workbooks to display tracking data that can be filtered, sorted, and even exported to Excel. The post will also show how related records can be listed in grids based on a selection in another grid, taking the Logic Apps tracking data that is sent to Log Analytics when diagnostics are captured to the next level. Logic App Standard: Trigger History Error – InvalidClientTrackingId Post by Sandro Pereira In this article, Sandro Pereira addresses the issue of InvalidClientTrackingId errors in Azure Logic Apps when customizing the Custom tracking ID property of workflow triggers. Logic Apps in VS Code fail to save if the Designer can’t resolve your API ID Post by Luis Rigueira Learn how to ensure your app settings are correctly configured when building Logic Apps (Standard) in VS Code, especially when using the Designer with API Management, to prevent save issues and streamline your development process.312Views0likes0CommentsHybrid deployment model for Logic Apps- Performance Analysis and Optimization recommendations
A few weeks ago, we announced the Public Preview Refresh release of Logic Apps hybrid deployment model that allows customers to run Logic Apps workloads on a customer managed infrastructure. This model provides the flexibility to execute workflows, either on-premises or in any cloud environment, thereby offering enhanced control over the operation of logic apps. By utilizing customer-managed infrastructure, organizations can adhere to regulatory compliance requirements and optimize performance according to their specific needs. As customers consider leveraging hybrid environments, understanding the performance of logic apps under various configurations and scenarios becomes critical. This document offers an in-depth performance evaluation of Azure Logic Apps within a hybrid deployment framework. It examines, several key factors such as CPU and memory allocation and scaling mechanisms, providing valuable insights aimed at maximizing the application’s efficiency and performance. Achieving Optimal Logic Apps Performance in Hybrid Deployments In this section, we will explore the key aspects that affect Logic Apps performance when deployed in a hybrid environment. Factors such as the underlying infrastructure of the Kubernetes environment, SQL configuration and scaling configuration can significantly impact the efficiency of workflows and the overall performance of the applications. The following blog entry provides details of the scaling mechanism of Hybrid deployment model - Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard | Microsoft Community Hub Configure Container Resource allocation: When you create a Logic App, a default value of 0.5 vCPU and 1GiB of memory would be allocated. From the Azure Portal, you can modify this allocation from the Container blade. - Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn Currently, the maximum allocation is set to 2vCPU and 4 GiB memory per app. In the future, there would be a provision made to choose higher allocations. For CPU intense/memory intense processing like custom code executions, select a higher value for these parameters. In the next section, we will be comparing the performance with different values of the CPU and memory allocation. This allocation would impact the billing calculation of the Logic App resource. Refer vCPU calculation for more details on the billing impact. Optimize the node count and size in the Kubernetes cluster. Kubernetes runs application workloads by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. A node pool is a group of nodes that share the same configuration (CPU, Memory, Networking, OS, maximum number of pods, etc.). You can choose the capacity (cores and memory), minimum node count and maximum node count for each node pool of the Kubernetes cluster. We recommend allocating a higher capacity for processing CPU intense, or memory intense applications Configure Scale rule settings: For a Logic App resource, we recommend you configure the maximum and minimum replicas which could be scaled out when a scale event occurs. A higher value for the max replicas helps in sudden spikes in the number of application requests. The interval with which the scaler checks for the scaling event and the cooldown period for the scaling event can also be configured from the Scale blade of Logic Apps resource. These parameters impact the scaling pattern. 
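To make the billing impact of these two knobs concrete, here is a small illustrative Python sketch that approximates vCPU consumption as the allocated vCPU per replica multiplied by the replica count over time. The numbers and the formula are simplifications for intuition only; refer to the vCPU calculation guidance linked above for the actual billing model.

```python
# Illustrative only: approximate vCPU consumption from a replica-count timeline.
vcpu_per_replica = 1.0          # container allocation chosen on the Container blade
sample_interval_minutes = 5     # how often the replica count below was sampled
replica_samples = [1, 2, 6, 10, 8, 3, 1]  # hypothetical scale-out/scale-in pattern

vcpu_minutes = sum(r * vcpu_per_replica * sample_interval_minutes for r in replica_samples)
print(f"Approximate consumption: {vcpu_minutes / 60:.2f} vCPU-hours")
```

A higher per-replica allocation or a higher maximum replica count both increase this figure, which is why the allocation and scale rule settings should be tuned together.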
Optimize the SQL server configuration: The hybrid deployment model uses Microsoft SQL for runtime storage. As such, there are lot of SQL operations performed throughout the execution of the workflow and SQL capacity has a significant impact on the performance of the app. Microsoft SQL server could either be a SQL server on Windows, or an Azure SQL database. Few recommendations on the SQL configuration for better performance: If you are using, Azure SQL database, run it on a SQL elastic pool. If you are using SQL server on Windows, run with at least 4vCPU configuration. Scale out the SQL server once the CPU usage of the SQL server hits 60-70% of the total available CPU. Performance analysis: For this performance analysis exercise, we have used a typical enterprise integration scenario which includes the below components. Data transformation: XSLT transformation, validation, and XML parsing actions Data routing: File system connector for storing the transformed content in a file share. Message queuing: RabbitMQ connector for sending the transformation result to Rabbit MQ queue endpoint. Control operations: For-each loop for looping through multiple records, condition execution, Scope, and error handling blocks. Request response: The XML data transmitted via HTTP request, and the status returned as a response. Summary: For these tests, we used the following environment settings: Kubernetes cluster: AKS cluster with Standard D2sV3 (2vCPU, 8GiBmemory) Max replicas: 20 Cooldown period: 300 seconds Polling interval: 30 With the above environment and settings, we have performed multiple application tests with different configuration of SQL server, resource allocation and test durations using Azure load testing tool. In the following table, we have summarized the response time, throughput, and the total vCPU consumption for each of these configurations. You can check each scenario for detailed information. Configuration Results Scenario SQL CPU and Memory allocation per Logic App Test duration Load 90 th Percentile Response time Throughput Total vCPU consumed Scenario 1 SQL general purpose V2 1vCPU/2GiB Memory 10 minutes with 50 users 503 requests 68.62 seconds 0.84/s 3.42 Scenario 2 SQL Elastic pool-4000DTU 1vCPU/2GiB Memory 10 minutes with 50 users 1004 requests 40.74 seconds 1.65/s 3 Scenario 3 SQL Elastic pool-4000DTU 2vCPU/4GiB Memory 10 minutes with 50 users 997 requests 40.63 seconds 1.66/s 4 Scenario 4 SQL Elastic pool-4000DTU 2vCPU/4GiB Memory 30 minutes with 50 users 3421 requests 26.6Seconds 1.9/s 18.6 Scenario 5 SQL Elastic pool-4000DTU 0.5vCPU/1GiB Memory 30 minutes with 50 users 3055 requests 31.38 seconds 1.7/s 12.4 Scenario 6 SQL 2022 Enterprise on Standard D4s V3 VM 0.5vCPU/1GiB Memory 30 minutes with 50 users 4105 requests 27.15 seconds 2.28/s 10 Scenario 1: SQL general purpose V2 with 1vCPU and 2 GiB Memory – 10 minutes test with 50 users In this scenario, we conducted a load test for 10 minutes with 50 users with the Logic App configuration of: 1 vCPU and 2 GiB Memory and Azure SQL database running on General purpose V2 plan. There were 503 requests with multiple records in each payload and it achieved the 68.62 seconds as the 90 th percentile response time and a throughput of 0.84 requests per second. Scaling: The Kubernetes nodes scaled out to 12 nodes and in total 3.42 vCPUs used by the app for the test duration. SQL Metrics: The CPU usage of the SQL server reached 90% of CPU usage quite early and stayed above 90% for the remaining duration of the test. 
From our backend telemetry as well, we observed that the actions executions were faster, but there was latency between the actions, which indicates SQL bottlenecks. Scenario 2: SQL elastic pool, with 1vCPU and 2 GiB memory- 10 minutes test with 50 users In this scenario, we conducted a load test for 10 minutes with 50 users with the Logic App configuration of: 1 vCPU and 2 GiB Memory and Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 1004 requests with multiple records in each payload and it achieved the 40.74 seconds as the 90 th percentile response time and a throughput of 1.65 requests per second. Scaling: The Kubernetes nodes scaled out to 15 nodes and in total 3 vCPUs used by the app for the test duration. SQL Metrics: The SQL server’s CPU utilization peaked to 2% of the elastic pool. Scenario 3: SQL elastic pool, with 2vCPU and 4 GiB memory- 10 minutes test with 50 users In this scenario, we conducted a load test for 10 minutes with 50 users with the Logic App configuration of 2 vCPU and 4 GiB Memory and Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 997 requests with multiple records in each payload and it achieved the 40.63 seconds as the 90 th percentile response time and a throughput of 1.66 requests per second. Scaling: The Kubernetes nodes scaled out to 21 nodes and in total 4 vCPUs used by the app for the test duration. SQL Metrics: The SQL server’s CPU utilization peaked to 5% of the elastic pool. Scenario 4: SQL elastic pool, with 2vCPU and 4 GiB memory- 30 minutes test with 50 users In this scenario, we conducted a load test for 30 minutes with 50 users with the Logic App configuration of: 2 vCPU and 4 GiB Memory and Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3421 requests with multiple records in each payload and it achieved the 26.67 seconds as the 90 th percentile response time and a throughput of 1.90 requests per second. Scaling: The Kubernetes nodes scaled out to 20 nodes and in total 18.6 vCPUs used by the app for the test duration. SQL Metrics: The SQL server’s CPU utilization peaked to 4.7% of the elastic pool. Scenario 5: SQL Elastic pool, with 0.5vCPU and 1 GiB memory- 30 minutes test with 50 users In this scenario, we have conducted a load test for 30 minutes with 50 users with the Logic App configuration of 0.5 vCPU and 1 GiB Memory and Azure SQL database running on a SQL elastic pool with 4000 DTU. There were 3055 requests with multiple records in each payload and it achieved the 31.38 seconds as the 90 th percentile response time and a throughput of 1.70 requests per second. Scaling: The Kubernetes nodes scaled out to 18 nodes and in total 12.4 vCPUs used by the app for the test duration. SQL Metrics: The SQL server’s CPU utilization peaked to 8.6% of the elastic pool CPU. Scenario 6: SQL 2022 Enterprise Gen2 on Windows 2022 on Standard D4s v3 image, with 0.5vCPU and 1 GiB memory- 30 minutes test with 50 users In this scenario, we conducted a load test for 30 minutes with 50 users with the Logic App configuration of: 0.5 vCPU and 1 GiB Memory and Azure SQL database running on an on-premises SQL 2022 Enterprise Gen2 version running on a Windows 2022 OS with Standard D4s v3 image (4 vCPU and 16GIB memory) There were 4105 requests with multiple records in each payload and it achieved the 27.15 seconds as the 90 th percentile response time and a throughput of 2.28 requests per second. 
Scaling: The Kubernetes nodes scaled out to 8 nodes and in total 10 vCPUs used by the app for the test duration. SQL metrics: The CPU usage of the SQL server went above 90% after few minutes and there was latency on few runs. Findings and recommendations: The following are the findings and recommendations for this performance exercise. Consider that this load test was conducted using unique conditions. If you conduct a similar test, the results and findings might vary, depending on factors such as workflow complexity, configuration, resource allocation and network configuration. The KEDA scaler performs the scale-out and scale-in operations faster, as such, while the total vCPU usage remains quite low, though the nodes scaled out in the range of 1-20 nodes. The SQL configuration plays a crucial role in reducing the latency between the action executions. For a satisfactory load test, we recommend starting with at least 4vCPU configuration on SQL server and scale out once CPU usage of the SQL server hits 60-70% of the total available CPU. For critical applications, we recommend having a dedicated SQL database for better performance. Increasing the dedicated vCPU allocation of the Logic App resource is helpful for the SAP connector, Rules Engine, .NET Framework based custom code operations and for the applications with many complex workflows. As a general recommendation, regularly monitor performance metrics and adjust configurations to meet evolving requirements and follow the coding best practices of Logic Apps standard. Consider reviewing the following article, for recommendations to optimize your Azure Logic Apps workloads: https://techcommunity.microsoft.com/blog/integrationsonazureblog/logic-apps-standard-hosting--performance-tips/3956971🔁 Public Preview Refresh: More Power to Data Mapper in Azure Logic Apps
We’re back with a Public Preview refresh for the Data Mapper in Azure Logic Apps (Standard) — bringing forward some long-standing capabilities that are now fully supported in the new UX. In our initial announcement, we introduced a redesigned experience focused on usability, error handling, and improved mapping for complex schemas. As we continue evolving the tool, we’re working to bring feature parity with the classic experience, while layering in modern enhancements along the way. With this update, several existing capabilities from the legacy Data Mapper are now available in the new preview version — so you can bring your advanced scenarios forward with confidence. 🛠️ Run XSLT Inside Your Data Map The ability to apply XSLT has long been a powerful feature in Logic Apps, and we’re excited to bring Run XSLT support into the new UX. You can now invoke reusable transformation logic from your map, including: Enterprise-grade XSLT Predefined templates or logic from your BizTalk workflows How to try it out: Create a new data map. Right-click on the MapDefintions or Maps folder and click Create new data map Store the XSLT file under Artifacts -> DataMapper/Extension -> InlineXslt. Open the data map and search for Run XSLT in the functions panel. Select the function and simply select the function you want to run from the dropdown Connect to desired destination node. In my case, the function simply adds a "Placeholder" value for the Name node at destination, alongside an "EmployeeType" node. Note that you do not need to connect any source node to the XSLT function given this is custom XSLT logic that will be applied directly at destination node. Upon testing the map, right value is generated in the destination schema 🔍 Execute XPath to Extract Targeted Values Execute XPath is now supported in the new experience, giving you control to extract specific values from nested XML structures. This function is particularly useful for: Accessing attributes and nested elements Applying logic based on the structure or content of incoming data How to try it out: Search for Execute XPath in the functions panel. Select the function and add the expression you want to extract Map it to destination node. Here is what the map will look like: The test payload correctly creates multiple Address nodes at destination based on the Address node at source. 🧩 Use Custom XML Functions Custom XML functions allow you to define and reuse logic across your map. This helps reduce duplication and supports schema-specific transformations. Now that support is available in the new UX, you can: Wrap complex logic into manageable components Handle schema-specific edge cases with ease How to try it out: Add the .xml function file under Artifacts -> DataMapper/Extension -> Functions Open the data map and under Utility category of functions, select the new function. In our case, the xml function is called Age Connect function input to Date_of_Birth node at source and output to Age node at destination. The map will look something like this Test the map and notice that the age is calculated correctly at the destination node 🌒 Dark Mode Support in VS Code The new UX now respects Dark Mode in VS Code, giving you a visually cohesive and low-contrast authoring experience — perfect for long mapping sessions. No extra steps needed — Dark Mode works automatically based on your VS Code theme settings. 
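If you want to experiment with an XPath expression before wiring it into the Execute XPath function shown earlier, a quick local check can help. The sketch below uses Python's lxml package against a made-up employee XML document; the schema and expressions here are hypothetical stand-ins, not the exact ones from the walkthrough above.

```python
from lxml import etree

# Hypothetical source document loosely modeled on the employee example above.
xml = b"""
<Employee>
  <Name>Alex Taylor</Name>
  <Date_of_Birth>1990-05-14</Date_of_Birth>
  <Addresses>
    <Address><City>Seattle</City></Address>
    <Address><City>Redmond</City></Address>
  </Addresses>
</Employee>
"""

root = etree.fromstring(xml.strip())

# Select every Address node, similar to the multi-node extraction shown above.
for address in root.xpath("/Employee/Addresses/Address"):
    print(etree.tostring(address, pretty_print=True).decode().strip())

# XPath can also pull scalar values, e.g. the date of birth consumed by the custom Age function.
print(root.xpath("string(/Employee/Date_of_Birth)"))
```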
⚙️ How to Enable the New Experience If you haven’t yet tried the new UX: Open your Logic Apps (Standard) project in VS Code Go to Logic Apps (Standard) extension → Settings → Data Mapper Select Version ~2 You’ll find detailed walkthroughs in the initial preview announcement blog. 💬 We’d Love Your Feedback We’re continuously evolving the Data Mapper, and your feedback is key to getting it right — especially as we bring more advanced transformation scenarios into the new experience. 👉 Submit your feedback here 🐛 Found an issue or have a specific feature request? Let us know on GitHub Issues Thanks again for being part of the journey — more updates coming soon! 🚀