Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized
Hi everyone,

I've recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the TimeGenerated column no longer work—the column seems to be unrecognized. My code hasn't changed at all, and the same queries run successfully in Microsoft 365 Defender's Advanced Hunting. For example, this basic KQL query:

DeviceLogonEvents
| where TimeGenerated >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName

now throws the error:

Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query.

Removing TimeGenerated makes the query work again, but this isn't a viable solution. Notably, the identical query still functions in the Advanced Hunting UI. This issue started affecting a Logic App that runs weekly: it worked on May 11th but failed on May 18th.

Questions:
- Has there been a recent schema change or deprecation of TimeGenerated in Defender ATP's KQL for Logic Apps?
- Is there an alternative column or syntax we should use now?
- Are others experiencing this?

Any insights or workarounds would be greatly appreciated!
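A workaround worth trying, offered as a suggestion rather than a confirmed fix: the Advanced Hunting API that the Defender ATP connector calls has historically exposed the event time on these tables as Timestamp, while TimeGenerated is the column name used by Log Analytics/Sentinel (and more recently surfaced as an alias in the Defender portal, which would explain why the query still works in the Advanced Hunting UI). Rewriting the query against Timestamp should keep it working through the connector:

DeviceLogonEvents
| where Timestamp >= ago(30d)
| where LogonType != "Local"
| where DeviceName !contains ".fr"
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-"
| where DeviceName !contains "mon-"
| distinct DeviceName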
Logic Apps Aviators Newsletter - July 25

In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

July's Ace Aviator: Şahin Özdemir

What's your role and title? What are your responsibilities?

I currently work for Rubicon Cloud Advisor, a Dutch company specialized in digital transformations, cloud adoption and AI implementation. At Rubicon I fulfil the role of Application and Integration Architect, while also being a Professional Scrum Trainer at Scrum.org. Even though this sounds like two completely different roles, in practice both go closely hand in hand. I firmly believe that good architecture, a strong development process, and application of best practices are key pillars for delivering high-quality solutions to my clients. Therefore, both roles come in handy in my day-to-day job, combined with my strong background in software development.

I work closely with companies and their teams in making their journey to Azure - especially Azure Integration Services - successful. Most of the time this journey starts with a business need or challenge, and I work with my clients to get a deeper understanding of their needs. This results in further analysis, capturing requirements, defining architecture, solution design, setting the stage for development (ALM) and being involved in quality assurance. At the same time, I think it's important to stay relevant from a technical perspective. That's why I also like being involved with implementing the solution. This way, I hear the technical struggles teams face and I can help them find the right solution.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?

Not a single day is the same, although there are some recurring activities. Specific parts of my day (or sprint) are dedicated to Scrum-related activities - whether it's participating in the daily scrum, having sprint reviews with stakeholders, planning the next sprint, refining the backlog with the team, or just aligning with the PO and stakeholders. I'm frequently involved in cross-organizational meetings focused on projects at scale, where I contribute from the perspective of architecture, technical expertise, and integration strategy.

In my role as a solution architect, I'm engaged in designing and implementing a critical integration platform for my client. This platform connects and exchanges data between many internal departments and external vendors - an effort that requires frequent alignment and collaboration. I'm always looking for opportunities to expand our Hybrid Integration Platform itself. Exploring how Azure resources may add value to our platform, and working closely with the team to realize such improvements to the platform's capabilities, is something I enjoy.

Outside of the regular meetings, I often focus on designing new integrations, holding working sessions with stakeholders to understand what they want. Based on these discussions, I assess the technical and architectural aspects of the solution. Every integration that lands on the platform is measured against both architectural and development principles and guidelines. I contribute to reviewing the solutions that have been developed, ensuring that each integration is high-quality, consistent, easy to understand, and maintainable. I support the platform team wherever possible, and if time permits, I develop parts of the solution myself – I see this as a great way to stay relevant from a technological perspective.
All the spare time I have, I spend on writing technical articles that may help others.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?

Because I enjoy helping others. Every day I work with a team of smart professionals on integration solutions and custom code within the Azure platform. Along the way, we regularly encounter challenges, limitations, or issues. In those moments, it's incredibly helpful to find solutions online or to have a community that can think along with you. Over the past few years, there have been many occasions where I just couldn't find a solution online for a technical problem with Logic Apps. In those cases, we either came up with a creative solution ourselves or received support from Microsoft. When the integration community faces a similar challenge, it's wasteful to tackle the same hurdles again. By documenting an approach or solution, others may save valuable time looking for one.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?

It is ok that you don't know everything. Just start doing, experiment, stay curious, challenge yourself, don't be afraid to ask questions, fail, learn and keep going!

What has helped you grow professionally?

I have spent a fair amount of my career at a big consulting firm. I started off as a software engineer and worked all the way up to senior manager and architect. A long journey like that gives great, well-dosed opportunities and learning experiences: first focusing on your technical (in-depth) skillset, then working on your soft skills like consulting, guiding and leading teams, solutioning and architecture. If I had not followed this path at that company, I would not be the person I am now professionally.

Be ok with the fact that growth doesn't happen overnight - no shortcuts, no magic pills. It's like a good red wine that needs time to mature. So do many challenging projects, become all-round and then choose a specialization, ask for constructive feedback, fail many times and take your time to reflect and learn. And don't forget to have a strong work ethic and an ongoing curiosity to learn new things. In the end I found that - from a technological perspective - quality attributes (the "-ilities"), enterprise application integration and Scrum made my heart skip a beat. So my advice is to always pursue what brings you joy!

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?

Overall, I must say that I'm happy with the current state of Logic Apps. Nevertheless, if I had a magic wand:

- I would like to see the service plans for Logic Apps Standard brought in line with Function Apps. The plans for Function Apps have better tiers from a memory, cores and pricing perspective, and being able to scale out and in based on specific metrics is more flexible than what Logic Apps Standard currently offers. Having more CPU/memory available in the plans would also improve the overall performance of Logic Apps in general, even though performance optimizations of many actions would also be more than welcome.
- What I currently really miss in the HTTP connector (and possibly others) is better control over request timeouts. Even though the setting is there, it is capped at 4 minutes. In practice, we need to deliver data to external APIs that work synchronously and take more time to complete.
Giving better control over these timeouts would make the usability of workflows even better!

- Even though some nice additions to the initialization of variables have been made recently, I would like to see the ability to initialize variables at any point in the workflow. For example, the foreach loop can be executed in parallel, and therefore the current global variables are not thread-safe, which leads to unexpected behavior.

News from our product group

Logic Apps Live June 2025
Missed Logic Apps Live in June? You can watch it here. We focused on the big Logic Apps announcements from Integrate 2025. There are a lot of great things to check!

Feedback Opportunity: SRE Agent + Logic Apps
Discover the new Applications feature in Azure API Management, enabling OAuth-based access to APIs and products. Streamline secure API access with built-in OAuth 2.0 application-based authorization.

Configure SQL Storage for Standard Logic Apps
Azure Logic Apps traditionally relies on Azure Storage to manage workflow states and runtime data. However, with the introduction of SQL as a storage provider (currently in preview), developers now have a compelling alternative that offers greater control, flexibility, and integration with existing SQL infrastructure. This post explores the benefits, configuration steps, and considerations for using SQL storage with Standard Logic Apps.

Announcing General Availability: Azure Logic Apps Standard Automated Test Framework
We're excited to announce the General Availability (GA) of the Azure Logic Apps Standard Automated Test Framework—a major step forward in enabling developers to build, test, and maintain enterprise-grade workflows with confidence and agility.

Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8
We're excited to announce the General Availability (GA) of Custom Code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices. With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services—making it easier than ever to build enterprise-grade solutions on Azure.

Business Process Tracking Reaches General Availability
Business Process Tracking provides key insights to business stakeholders from your Logic Apps (Standard) implementation in an efficient and timely manner. Today, we are pleased to announce the General Availability of this capability, allowing customers to leverage it in their production workloads.

Announcement: General Availability of Logic Apps Hybrid Deployment Model
We're excited to announce the Public Preview of two major integrations that bring the power of Azure Logic Apps to AI Agents in Foundry – Logic Apps as Tools and AI Agent Service Connector. Learn more in our announcement post!

Announcing Public Preview: Organizational Templates in Azure Logic Apps
We're excited to announce the Public Preview of Organizational Templates in Azure Logic Apps—empowering teams to author, share, and reuse automation patterns across their organization. With this release, we're also rolling out a brand-new UI experience to easily create templates directly from your workflows—no manual packaging required!
OpenTelemetry in Azure Logic Apps (Standard and Hybrid)
OpenTelemetry provides a unified, vendor-agnostic framework for collecting telemetry data—logs, metrics, and traces—across different services and infrastructure layers. It simplifies monitoring and makes it easier to integrate with a variety of observability backends such as Azure Monitor, Grafana Tempo, Jaeger, and others. For Logic Apps—especially when deployed in hybrid or on-premises scenarios—OpenTelemetry is a powerful addition that elevates diagnostic capabilities beyond the default Application Insights telemetry.

Logic App Standard - When High Memory / CPU usage strikes and what to do
Monitoring your applications is essential, as it ensures that you know what's happening and are not caught by surprise when something goes wrong. One such event is the performance of your application starting to degrade, with processing becoming slower than usual. This can happen for various reasons; in this blog post we discuss high Memory and CPU usage, why it affects your Logic App, and some root causes we have seen for customers.

Introducing Agent in a Day
Agent in a Day represents a fantastic opportunity for customers to participate in hackathon-style contests where attendees learn how to build agents and then can apply them to their unique business use cases. For Partners, Agent in a Day represents a great way to engage your customers by building agents with them and uncovering new use cases.

Introducing Confluent Kafka Connector (Public Preview)
We are pleased to announce the introduction of the Confluent Kafka Connector in Logic Apps (Standard), which allows you to both send and receive messages between Logic Apps and Confluent Kafka. Confluent Kafka is a distributed streaming platform for building real-time data pipelines and streaming applications. It is used across many industries, including financial services, omnichannel retail, autonomous cars, fraud detection services, microservices and IoT deployments. Our current connector offering supports both triggers (receive) and sending (publish) within Logic Apps.

News from our community

Logic App Standard: Throw exceptions like a pro!
Post by Şahin Özdemir
Learn how to throw exceptions in Logic App Standard using a simple Compose action—no code needed, just clever workflow design.

Azure Logic Apps: are you handling large blobs? Keep memory usage under control.
Post by Stefano Demiliani
Struggling with large blob files in Logic Apps? Learn how to keep memory usage under control and avoid out-of-memory errors with smart workflow design and a few performance-boosting tricks.

De-SOAPing Services: SOAP to REST using Azure API Management
Video by Stephen W Thomas
Struggling with legacy SOAP integrations from BizTalk to Azure? Check out this video on simplifying SOAP-to-REST conversions using Azure API Management and learn how easily you can manage SOAP envelopes and streamline your Logic Apps integrations!

Integrating Entra ID and AI Agent workflows in Azure Logic Apps
Post by Brian Veldman
Discover how to build AI-powered workflows in Azure Logic Apps that interact with Entra ID, automate tasks, and adapt dynamically using agentic tools and OpenAI models.
Advanced KQL Queries for Logic Apps in Application Insights: A Practical Guide
Post by Dieter Gobeyn
Boost Logic App performance with advanced KQL queries in Application Insights—spot bottlenecks, analyze slow actions, and optimize workflows without upgrading your hosting plan.

How to Build an AI Agent with Azure Logic Apps
Post by Cameron McKay
Learn how to build your first AI Agent in Azure Logic Apps using Agent Loop—connect to OpenAI, design smart prompts, and automate tasks like weather reporting with low-code workflows.

You Can Now Initialize All Your Variables In One Single Action
Post by Luis Rigueira
You can now initialize multiple variables in Logic Apps with a single action—making your workflows cleaner, faster, and easier to manage. It is a Friday Fact, brought to you by Luis Rigueira!

Integration Insights Podcast: The Future of Integration
Video by Sagar Sharma and Jochen Toelen
In this two-part episode of the Integration Insights podcast, Sagar, Jochen and Kent dive into how integration is evolving in a cloud-first world. From BizTalk migrations to hybrid deployments with Azure Arc, they share practical insights and best practices to future-proof your integration strategy. A must-listen! You can watch part 2 here.

Event Grid vs Service Bus vs Event Hubs vs Storage Queues: Choosing the Right Messaging Backbone in Azure
Post by Prashant Singh
Confused by Azure's messaging options? This guide breaks down Event Grid, Service Bus, Event Hubs, and Storage Queues—helping you choose the right tool for real-time events, telemetry, enterprise workflows, or lightweight tasks.

IntelliSense in Logic Apps Just Got Smarter – Matching Brackets in the Expression Editor!
Post by Sandro Pereira
Logic Apps just got a lot friendlier—bracket matching in the expression editor now highlights pairs as you type, making it easier to write and debug complex expressions. A Friday Fact from Sandro Pereira.

How to Build Resilient Integrations for Mission-Critical Systems
Post by Lilan Sameera
Learn how to build resilient integrations for mission-critical systems using Logic Apps, Service Bus, and Event Hub—ensuring reliable data delivery, smart retries, and clean outputs even under pressure.
Logic App Standard - When High Memory / CPU usage strikes and what to do

Introduction

Monitoring your applications is essential, as it ensures that you know what's happening and are not caught by surprise when something goes wrong. One possible event is the performance of your application starting to decrease, with processing becoming slower than usual. This may happen for various reasons, and in this blog post we will discuss high Memory and CPU usage, why it affects your Logic App, and some of the root causes we have seen affect customers.

How high Memory and CPU usage affect processing

When instructions and information are loaded into memory, they occupy space that cannot be used for other sets of instructions. The more memory is occupied, the longer the operating system needs to "think" to find the correct set of instructions and retrieve or write the information. And the more time the OS spends finding or writing your instructions, the less time it spends actually doing the processing. The same applies to the CPU: if the CPU load is higher, everything slows down, because the available workers cannot work on multiple items at the same time.

This translates into overall slowness in Logic App processing. When CPU or Memory reach a certain threshold, run durations go up and internal retries increase as well, because the runtime workers are busy and tasks have timeout limits.

For example, consider a simple run with a Read Blob built-in connector action, where the blob is very large (say 400MB). The flow goes: Request Trigger -> Read Blob -> Send Email. The trigger has a very short duration and doesn't carry much overhead, because we're not loading much data in it. The Read Blob action, though, will read the payload into memory, because built-in connectors load all their data into memory (see Built-in connector overview - Azure Logic Apps | Microsoft Learn).

So, not counting background processes, Kudu and maintenance jobs, we've loaded 400MB into memory. On a WS1 plan, we have 3.5GB available. Even a blank Logic App occupies some memory, although the amount may vary; if we assume the base runtime and its processes take about 500MB, that leaves roughly 3GB available. If we load 4 such files at the same time, we will be using about 2.1GB (files plus base usage)—already around 60% of the memory. And this is just one workflow and 4 runs. Of course, the memory is released after each run completes, but on a broader scale, with multiple runs and multiple actions executing at the same time, you can see how easy it is to reach the thresholds. When we see memory above roughly 70%, background tasks may behave in unexpected ways, so it's essential to have a clear idea of how your Logic App is working and what data you're loading into it.

The same goes for CPU: the more you load onto it, the slower it gets. You may have low memory usage, but if you're doing highly complex tasks such as XML transformations or other built-in data transforms, your CPU will be heavily used. And the bigger the file and the more complex the transformation, the more CPU is used.

How to check memory/CPU

Correctly monitoring your resource usage is vital and can avoid serious impact. To help your Standard logic app workflows run with high availability and performance, the Logic Apps Product Group has created the Health Check feature.
This feature is still in Preview, but it's already a very big assistance in monitoring. You can read more about it in the following article, written by our PG members Rohitha Hewawasam and Kent Weare: Monitoring Azure Logic Apps (Standard) Health using Metrics. See also the official documentation for this feature: Monitor Standard workflows with Health Check - Azure Logic Apps | Microsoft Learn.

The Metrics can also assist in providing a better view of the current usage. Logic App Metrics don't drill down into CPU usage, because those metrics are not available at the app level, but rather at the App Service Plan level. You will be able to see the working memory set and workflow-related metrics. Example metric: Private Bytes (AVG) - Logic App metrics.

On the App Service Plan overview, you will be able to see some charts with these metrics. It's an entry point to understand what is currently going on with your ASP and its current health status. Example: ASP Dashboard.

In the Metrics tab, you can create your own charts with much greater granularity and save them as Dashboards. You can also create Alerts on these metrics, which greatly increases your ability to effectively monitor and act on abnormal situations, such as high Memory usage for prolonged periods of time. Example: Memory Percentage (Max) - ASP Metrics.

There are currently multiple solutions available to analyze your Logic App behavior and metrics, such as Dashboards and Azure Monitor Logs. I highly recommend reading these two articles from our PG that discuss these topics and explain and exemplify them: Logic Apps Standard Monitoring Dashboards | Microsoft Community Hub and Monitoring Azure Logic Apps (Standard) with Azure Monitor Logs.

How to mitigate - a few possibilities

Platform settings on 32 bits

If your Logic App was created long ago, it may be running on an old setting. Early Logic Apps were created on 32 bits, which severely limited memory scalability, as that architecture only allows a maximum of 3GB of usage. This comes from operating system limitations and the memory allocation architecture. Later, the standard became creating Logic Apps on 64 bits, which allows the Logic App to scale and fully use the maximum memory allowed in all ASP tiers (up to 14GB in WS3). This can be checked and updated in the Configuration tab, under Platform settings.

Orphaned runs

It is possible that some runs do not finish, for various reasons. Whether they are long-running or have failed due to unexpected exceptions, runs that linger in the system will cause an increase in memory usage, because their information is not unloaded from memory. When runs become orphaned, they may go unnoticed, but they remain "eating up" resources. The easiest way to find these runs is to check each workflow's Run History, filter by the "Running" status, and see which runs are still going well past their expected termination. In my example, I had multiple runs that had started hours before but were not yet finished. Although this is a good example, it requires you to check each workflow manually.

You can also do this with Log Analytics, by executing a query that returns all runs that have not yet finished. You need to activate the Diagnostic Settings, as described in this blog post: Monitoring Azure Logic Apps (Standard) with Azure Monitor Logs. To make your troubleshooting easier, I've created a query that does this for you.
It checks only for run events and returns those that do not have a matching Completed status. The OperationName field registers the Start, Dispatch and Completed statuses. By eliminating the Dispatched status, we're left with Start and Completed; therefore, by grouping and counting the Run IDs, this query returns all the Run IDs that have a Start but no matching Completed status.

LogicAppWorkflowRuntime
| where OperationName contains "WorkflowRun" and OperationName != 'WorkflowRunDispatched'
| project RunId, WorkflowName, TimeGenerated, OperationName, Status, StartTime, EndTime
| summarize Runs = count() by RunId, WorkflowName, StartTime
| where Runs == 1
| project RunId, WorkflowName, StartTime

Large payloads

As previously discussed, large payloads can create a big overload and greatly increase memory usage. This applies not only to the built-in connectors but also to the managed connectors, as the information still needs to be processed. Although with managed connectors the data is not loaded into the ASP memory, there is a lot of data flowing and being processed in CPU time. A Logic App is capable of processing a large amount of data, but when you combine large payloads, a very large number of concurrent runs, and a large number of actions and incoming/outgoing requests, you get a mixture that, if left unattended and allowed to keep growing, will cause performance issues over time.

Usage of built-in connectors

The built-in (or In-App) connectors run natively in the Azure Logic Apps runtime. Because of this, their performance, capabilities and pricing are better in most cases. But because they run under the Logic App runtime, all their data is loaded into memory. This requires you to plan your architecture well and forecast for high levels of usage and heavy payloads. As previously shown, using built-in connectors to handle very large payloads can cause unexpected errors such as Out of Memory exceptions: the connector will try to load the payload into memory, and if memory runs out partway through, it can crash the worker and return an Out of Memory exception. This will be visible in the runtime logs, and it may also leave the run orphaned, stuck in an unrecoverable state. Internally, the runtime will attempt to gracefully retry these failed tasks, multiple times, but there is always the possibility that the state is not recoverable and the worker crashes. This makes it necessary to closely monitor and plan for high-usage scenarios, in order to properly scale your App Service Plans up and out.

Learnings

Monitoring can also be achieved through the Log Stream, which requires you to configure a Log Analytics connection, but can provide a great deal of insight into what the runtime is doing. It can give you Verbose level or just Warning/Error levels. It does provide a lot of information and can be a bit tricky to read, but the level of detail it provides can be a huge help in troubleshooting, both on your side and on the Support side. For this, navigate to the Log Stream tab, enable it, change to "Filesystem Logs" and enjoy the show.
If an Out of Memory exception is caught, it will show up in red letters (as other exceptions do) and look similar to this:

Job dispatching error: operationName='JobDispatchingWorker.Run', jobPartition='', jobId='', message='The job dispatching worker finished with unexpected exception.', exception='System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Threading.Thread.StartCore()
   at System.Threading.Thread.Start(Boolean captureContext)
   at System.Threading.Thread.Start()

No production Logic App should be left without monitoring and alerting. Critical system or not, you should always plan not only for disaster scenarios but also for higher-than-usual volumes, because nothing is static, and there is always the possibility that a system with low usage today will be scaled and used in ways it was not originally intended. Implementing monitoring on resource metrics is very valuable here and can detect issues before they become overwhelming and cause a show-stopper scenario. You can use the out-of-the-box Metrics from the Logic App, or the metrics on the ASP; the latter cover a wider range of signals, as they are not as specific as the Logic App ones. You can also create custom Alerts from the Metrics, increasing your coverage of distress signals from the Logic App processing. Leaving your Logic App without proper monitoring will likely catch you, your system administrators and your business by surprise when processing falls outside the standard parameters and chaos starts to arise.

There is one key insight that must be applied whenever possible: expect the best, prepare for the worst. Always plan ahead, monitor the current status, and think proactively, not just reactively.

Disclaimer: The base memory and CPU values are specific to your app, and can vary based on the number of apps in the App Service Plan, the number of instances you have set as Always Running, the number of workflows in the app, how complex those workflows are, and what internal jobs need to be provisioned.
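As an addendum to the alerting guidance above, here is a minimal sketch of creating such a metric alert with the Azure CLI. The resource names, IDs and thresholds are placeholders to adapt to your environment; MemoryPercentage (and, similarly, CpuPercentage) are App Service Plan metrics like the ones discussed in this post:

az monitor metrics alert create \
  --name "asp-high-memory" \
  --resource-group "my-rg" \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/serverfarms/my-asp" \
  --condition "avg MemoryPercentage > 70" \
  --window-size 15m \
  --evaluation-frequency 5m \
  --action "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group" \
  --description "ASP memory above 70% for 15 minutes"

A second alert on CpuPercentage with a similar condition covers the CPU side of the same advice.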
How to use azure logic app to update AAD user’s password automatically

Scenario

Azure Logic Apps is an extraordinary cloud automation service. To update Azure Active Directory users' passwords in batches and automatically, a Consumption or Standard logic app can invoke the Microsoft Graph API, but this requires specific permissions.

References
- passwordAuthenticationMethod: resetPassword - Microsoft Graph beta | Microsoft Learn
- Sign in with resource owner password credentials grant - Microsoft Entra | Microsoft Learn
- List passwordMethods - Microsoft Graph beta | Microsoft Learn
- Update user - Microsoft Graph v1.0 | Microsoft Learn

Services Used
- Azure Logic App (Consumption or Standard)
- Azure Active Directory (AAD)

Solution 1

1. Create an AAD application registration.
2. Add the permission UserAuthenticationMethod.ReadWrite.All. More details: https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-beta&tabs=http#permissions
3. Grant admin consent.
4. Set up the logic app designer. Here we selected 'When a HTTP request is received' as the trigger.

Action 1: HTTP – Get token
This action gets the token that will be used in the following actions.
Method: POST
URL: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
client_id={MyClientID}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={MyClientSecret}
&grant_type=password
&username={MyUsername}%40{myTenant}.com
&password={MyPassword}
Reference: https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth-ropc

Action 2: HTTP – Get Pwd ID
This action gets the password method ID.
Method: GET
URL: https://graph.microsoft.com/beta/me/authentication/passwordMethods
Content-Type: application/json
Reference: https://learn.microsoft.com/en-us/graph/api/authentication-list-passwordmethods?view=graph-rest-beta&tabs=http

Action 3: HTTP – Update Pwd
This action updates the password of a user.
Method: POST
URL: https://graph.microsoft.com/beta/users/{userObjectId | userPrincipalName}/authentication/passwordMethods/{passwordMethodId}/resetPassword
Content-Type: application/json
Body:
{
  "newPassword": "{myNewPassword}"
}
Reference: https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-beta&tabs=http#http-request

In the URI, we can use this expression to get the value of passwordMethodId:
body('HTTP_2_-_Get_Pwd_ID')['value'][0]['id']

Solution 2

1. Grant these 4 permissions to the application registration and grant admin consent:
- User.ManageIdentities.All
- User.EnableDisableAccount.All
- User.ReadWrite.All
- Directory.ReadWrite.All
Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http#permissions
2. Add the 'User Administrator' role assignment to the application registration. In delegated access, the calling app must be assigned the Directory.AccessAsUser.All delegated permission on behalf of the signed-in user. In application-only access, the calling app must be assigned the User.ReadWrite.All application permission and at least the User Administrator Azure AD role. Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http
3. Set up the logic app designer. Here we also selected 'When a HTTP request is received' as the trigger.

Action 1: HTTP – Get token
This action gets the token that will be used in the following actions.
Method: POST
URL: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
client_id={MyClientID}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={MyClientSecret}
&grant_type=client_credentials

Action 2: HTTP – Update Pwd
This action updates the password of a user.
Method: PATCH
URL: https://graph.microsoft.com/v1.0/users/{userObjectId}
Content-Type: application/json
Body:
{
  "passwordProfile": {
    "forceChangePasswordNextSignIn": false,
    "password": "{myNewPassword}"
  }
}
Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http#example-3-update-the-passwordprofile-of-a-user-to-reset-their-password

Result

We can check user password update records in the AAD audit logs in the Azure portal: AAD page -> Users -> AAD audit logs.
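One closing note on Solution 1: the expression for passwordMethodId assumes the passwordMethods call returns a single entry. Per the Graph beta docs, the password method carries a well-known constant id; treat the exact response shape below as an illustrative sketch rather than a verbatim payload:

{
  "value": [
    {
      "id": "28c10230-6103-485e-b985-444c60001490",
      "createdDateTime": null
    }
  ]
}

The expression body('HTTP_2_-_Get_Pwd_ID')['value'][0]['id'] simply picks the id of that first (and only) entry.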
OpenTelemetry in Azure Logic Apps (Standard and Hybrid)

Why OpenTelemetry?

As modern applications become more distributed and complex, robust observability is no longer optional—it is essential. Organizations need a consistent way to understand how workflows are performing, trace failures, and optimize end-to-end execution. OpenTelemetry provides a unified, vendor-agnostic framework for collecting telemetry data—logs, metrics, and traces—across different services and infrastructure layers. It simplifies monitoring and makes it easier to integrate with a variety of observability backends such as Azure Monitor, Grafana Tempo, Jaeger, and others. For Logic Apps—especially when deployed in hybrid or on-premises scenarios—OpenTelemetry is a powerful addition that elevates diagnostic capabilities beyond the default Application Insights telemetry.

What is OpenTelemetry?

OpenTelemetry (OTel) is an open-source observability framework under the Cloud Native Computing Foundation (CNCF) that provides a unified standard for generating, collecting, and exporting telemetry data such as logs, metrics, and traces. By abstracting away vendor-specific instrumentation and enabling interoperability across various tools and platforms, OpenTelemetry empowers developers and operators to gain deep visibility into distributed systems—regardless of the underlying infrastructure or language stack. In the context of Azure Logic Apps, OpenTelemetry support enables standardized, traceable telemetry that can integrate seamlessly with a wide range of observability solutions. This helps teams monitor, troubleshoot, and optimize workflows with more precision and flexibility.

How to Configure from Visual Studio Code?

To configure OpenTelemetry for a Logic App (Standard) project from Visual Studio Code:

1. Locate the host.json file in the root of your Logic App project, and enable OpenTelemetry by adding "telemetryMode": "OpenTelemetry" at the root level of the file:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "telemetryMode": "OpenTelemetry"
}

2. Define the following application settings in local.settings.json or within your CI/CD deployment pipeline (a sample local.settings.json sketch appears at the end of this article):

- OTEL_EXPORTER_OTLP_ENDPOINT: The OTLP exporter endpoint URL where the telemetry data should be sent.
- OTEL_EXPORTER_OTLP_HEADERS (optional): A list of headers to apply to all outgoing data. This is commonly used to pass authentication keys or tokens to your observability backend.

If your endpoint requires additional OpenTelemetry-related settings, include those in the application settings as well. Refer to the official OTLP Exporter Configuration documentation for details.

How to Configure OpenTelemetry from Azure Portal? – Standard Logic Apps

To enable OpenTelemetry support for a Standard Logic App hosted on either a Workflow Standard Plan or App Service Environment v3, follow the steps below.

1. Update the host.json File

- In the Azure portal, navigate to your Standard Logic App resource.
- In the left-hand menu, under Development Tools, select Advanced Tools > Go. This opens the Kudu console.
- In Kudu, from the Debug Console menu, select CMD, and navigate to site > wwwroot.
- Locate and open the host.json file in a text editor.
- Add the following configuration at the root level of the file to enable OpenTelemetry, then save and close the editor:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "telemetryMode": "OpenTelemetry"
}
2. Configure App Settings for Telemetry Export

- Still within your Logic App resource, go to Settings > Environment Variables and select App settings.
- Add the following key-value pairs:

App Setting | Description
OTEL_EXPORTER_OTLP_ENDPOINT | The OTLP (OpenTelemetry Protocol) endpoint URL where telemetry data will be exported. For example: https://otel.your-observability-platform.com
OTEL_EXPORTER_OTLP_HEADERS | (Optional) Any custom headers required by your telemetry backend, such as an Authorization token (e.g., Authorization=Bearer <key>).

- Select Apply to save the configuration.

How to Configure OpenTelemetry from Azure Portal? – Hybrid Logic Apps

To enable OpenTelemetry support for a Standard Logic App using the Hybrid hosting option, follow the steps below. This configuration enables telemetry collection and export from an on-premises deployment, using environment variables and local file system access.

1. Modify host.json on the SMB Share

- On your on-premises file share (SMB), navigate to the root directory of your Logic App project.
- Locate the host.json file, add the following configuration to enable OpenTelemetry, and save the file:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "telemetryMode": "OpenTelemetry"
}

2. Configure Environment Variables in Azure Portal

- Go to the Azure portal and navigate to your Standard Logic App (Hybrid) resource.
- From the left-hand menu, select Settings > Containers, then click Edit and deploy.
- In the Edit a container pane, select Environment variables, then click Add to define the following:

Name | Source | Value | Description
OTEL_EXPORTER_OTLP_ENDPOINT | Manual | <OTLP-endpoint-URL> | The OTLP exporter endpoint URL where telemetry should be sent. Example: https://otel.yourbackend.com
OTEL_EXPORTER_OTLP_HEADERS (Optional) | Manual | <OTLP-headers> | Custom headers (e.g., Authorization=Bearer <token>) required by your observability backend.

- Once you've added all necessary settings, click Save.

Example of Endpoint Configuration & How to Check Logs

To export telemetry data using OpenTelemetry, configure the following environment variables in your Logic App's application settings or container environment:

Name | Source | Value | Description
OTEL_EXPORTER_OTLP_ENDPOINT | Manual Entry | https://otel.kloudmate.com:4318 | The OTLP receiver endpoint for your observability backend.
OTEL_EXPORTER_OTLP_HEADERS | Manual Entry | Authorization=<your-api-key> | Used to authenticate requests to the telemetry backend.
OTEL_EXPORTER_OTLP_PROTOCOL | Manual Entry | http/protobuf | Protocol used for exporting telemetry (KloudMate supports gRPC/HTTP).

In this example, we are using KloudMate as the destination for telemetry data. Once correctly configured, your Logic App will begin exporting telemetry data to KloudMate.

Limitations and Troubleshooting Steps

Current Limitations
- Supported trigger types for OpenTelemetry in Logic Apps are: HTTP, Service Bus, and Event Hub.
- Exporting metrics is not currently supported.

Troubleshooting Steps
- No traces received: Validate the OTEL_EXPORTER_OTLP_ENDPOINT URL and port availability, and ensure outbound traffic to the observability backend is permitted.
- Authentication issues: Review and correct the header values in OTEL_EXPORTER_OTLP_HEADERS.

References
- Set up and view enhanced telemetry for Standard workflows - Azure Logic Apps | Microsoft Learn
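For local development with the Visual Studio Code flow described above, the OTLP settings live in local.settings.json. The following is a minimal sketch: the endpoint, key, and protocol values are placeholders for your own backend, and the non-OTEL entries are typical local-development defaults rather than requirements:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://otel.example.com:4318",
    "OTEL_EXPORTER_OTLP_HEADERS": "Authorization=Bearer <your-api-key>",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf"
  }
}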
Introducing Agent in a Day

Looking for a jumpstart on how to build agents? Confused by the plethora of options when building agents? You have come to the right place.

In May 2025, the Logic Apps team introduced Agent Loop, which provides the ability to build autonomous or conversational agents in Logic Apps. This gives customers an easy-to-use agent-building design surface, the ability to deploy your agent to Azure, and integration with Azure AI Foundry. Azure represents an enterprise-ready platform that addresses your organizational requirements, including VNET integration, Private Endpoint support and Managed Identity, and gives you several scaling options.

Sounds great? It does, but how can I get started? This is where Logic Apps Agent in a Day comes in. We have recently published a step-by-step lab guide that will help you build an IT Support Agent that uses ServiceNow as the IT Service Management tool. The guide is available here: https://aka.ms/la-agent-in-a-day, and we have included an Instructor Slide Deck in Module 1.

Agent in a Day represents a fantastic opportunity for customers to participate in hackathon-style contests where attendees learn how to build agents and can then apply them to their unique business use cases. For Partners, Agent in a Day is a great way to engage your customers by building agents with them and uncovering new use cases.

Have any feedback or ideas on how to make this better? Feel free to send me a DM and we can discuss further.
Configure SQL Storage for Standard Logic Apps

Logic Apps uses Azure Storage by default to hold workflows, states and runtime data. However, now in preview, you can use SQL storage instead of Azure Storage for your logic app's workflow-related transactions. Note that Azure Storage is still required; SQL is only an alternative for workflow transactions.

Why Use SQL Storage?

Benefit | Description
Portability | SQL runs on VMs, PaaS, and containers—ideal for hybrid and multi-cloud setups.
Control | Predictable pricing based on usage.
Reuse Assets | Leverage SSMS, CLI, SDKs, and Azure Hybrid Benefits.
Compliance | Enterprise-grade backup, restore, failover, and redundancy options.

When to Use SQL Storage

Scenario | Recommended Storage
Need control over performance | SQL
On-premises workflows (Azure Arc) | SQL
Predictable cost modeling | SQL
Prefer SQL ecosystem | SQL
Reuse existing SQL environments | SQL
General-purpose or default use cases | Azure Storage

Configuration via Azure Portal

Prerequisites:
- Azure subscription
- Azure SQL Server and Database

Azure SQL Setup:
1. From your Azure SQL server, navigate to Security > Networking > Public Access and select "Selected networks".
2. Scroll down and enable "Allow Azure services and resources…".
3. Navigate to Settings > Microsoft Entra ID and ensure "Microsoft Entra authentication only" is unchecked. Note: this can also be done during SQL server creation, from the Networking tab.

Standard Logic App Setup:
1. From the Azure portal, create a new Logic App (Standard).
2. In the Storage tab, select SQL from the dropdown.
3. Add your SQL connection string (a sample format appears after the Resources list below).

Verification tip: After deployment, check your logic app's environment variable 'Workflows.Sql.ConnectionString' to confirm the SQL DB name is reflected.

Known Issues & Fixes

Issue | Fix
Could not find a part of the path 'C:\home\site\wwwroot' | Re-enable SQL authentication and verify path settings.
SQL login error due to AAD-only authentication | Navigate to Settings > Microsoft Entra ID and ensure "Microsoft Entra authentication only" is unchecked.

Final Thoughts

SQL as a storage provider for Logic Apps opens up new possibilities for hybrid deployments, performance tuning, and cost predictability. While still in preview, it's a promising option for teams already invested in the SQL ecosystem. If you are already using this as an alternative, or think this would be useful, let us know in the comments below.

Resources
- https://learn.microsoft.com/en-us/azure/logic-apps/set-up-sql-db-storage-single-tenant-standard-workflows
- https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-pricing
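As referenced in step 3 of the Logic App setup above, the Storage tab expects a standard ADO.NET connection string for Azure SQL. A minimal sketch with placeholder values, using SQL authentication as required by the setup notes:

Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;User ID=<sql-user>;Password=<sql-password>;Encrypt=True;Connection Timeout=30;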
Announcing the General Availability of the Azure Logic Apps Rules Engine

This week we announced agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes, enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals.

Now, we are announcing the General Availability of the Azure Logic Apps Rules Engine: a deterministic rules engine runtime based on the RETE algorithm that allows in-memory execution, prioritization, and re-evaluation of business rules in Azure Logic Apps. The Azure Logic Apps Rules Engine is a decision-management inference engine that gives customers the capability to build Standard workflows in Azure Logic Apps and integrate readable, declarative, and semantically rich rules that operate on multiple data sources. The native data sources available today for the Rules Engine are XML and .NET objects. These data sources are called "facts" and are used to construct rules from small building blocks of business logic, or "rulesets".

To create rules, you need the Rules Composer, which can be downloaded from the download center. The Rules Engine can also interact with the data exchanged by all the connectors available for Standard logic app resources. This design pattern promotes code reuse, design simplicity, and business logic modularity. The Rules Engine uses a VS Code experience to create Logic Apps projects with Rules Engine support. For more information on how to create projects with the Rules Engine, visit here.

What can I do with it?

In a world of AI that essentially follows a probabilistic approach, rules engines are vital because they provide consistency, clarity, and compliance across different business goals. When you use rules with a workflow in Azure Logic Apps, you can define the logic, constraints, and policies that govern how to process, validate, and exchange data across systems, while avoiding AI hallucinations. Rules also help you make sure that applications follow the regulations and standards of their respective industries and markets. By using a rules engine, you can manage and update your workflow's business logic independently from the code and without having to alter your workflow. This approach helps you reduce the complexity and maintenance costs of your applications and increase their agility and scalability.

From a technical perspective, the Azure Logic Apps Rules Engine allows you to do forward chaining, or forward reasoning: a re-evaluation of rules triggered by changes in the facts as a result of a rule's execution. This is one of those scenarios where a rules engine is unique; instead of writing complex code or creating complex "state-machine" workflows, the Logic Apps Rules Engine performs this task with an instruction called "Update".

Getting started

In the example below, I will show how to use the Logic Apps Rules Engine to ground an AI workflow loop. For this scenario, I am adding a Rules Engine workflow to an existing agent loop workflow and using it to correct rates and provide a "cross-sell" recommendation. First, I need to deploy the workflow from VS Code to Azure. As the Rules Engine currently only supports XML and .NET Framework objects, I create an XSD schema (using Copilot if you don't have an existing one) and use it with a "Compose XML with schema" action to create the XML fact that is needed.
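For illustration, a fact schema for this kind of rate scenario might look like the sketch below. The namespace and element names are hypothetical stand-ins, not the schema used in the original demo:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://contoso.example/rates"
           xmlns="http://contoso.example/rates"
           elementFormDefault="qualified">
  <!-- A minimal fact describing a quoted rate for a customer -->
  <xs:element name="RateQuote">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="CustomerId" type="xs:string" />
        <xs:element name="Product" type="xs:string" />
        <xs:element name="Rate" type="xs:decimal" />
        <xs:element name="CrossSellOffer" type="xs:string" minOccurs="0" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>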
To obtain the returned data, I am using the "Parse XML with schema" action as well. After the logic app is deployed, I add it as a tool in the agent loop workflow, with a "Call workflow in this logic app" action. I then pass the values that I need as parameters for the Rules Engine to work, and I leave the Rules Engine return values empty.

Then I update my system prompt to indicate how I want the Rules Engine to be used. The agent loop will find the right tool for the right job. Once the system prompt has been updated, I proceed to run the workflow with a payload. I have highlighted in red in the agent chat the guardrails imposed by the Rules Engine. Those rules have been used to make sure that the AI responses fall within the internal compliance and cross-sell company criteria. Some of the business rules can have different priorities and might require recalculation for accuracy. The Logic Apps Rules Engine takes care of this without coding or adding complex business logic through additional workflows. Further adjustments to the rules using the Rules Composer will ground the agent's results even more.

What else can I do with it?

You can use a Rules Engine in any context. In fact, decision management, which falls under intelligent business process automation, is growing among customers who want flexibility, governance and compliance in their cloud workloads. Another well-known scenario is BizTalk migrations to Azure Logic Apps, for customers who have implemented the BizTalk BRE for decision management, content redirection, SWIFT, or the .NET Framework.

Demo

Please watch the following short demo of this sample.

How to use it

If you are running the public preview version of the Rules Engine, we recommend that you recreate your Rules Engine project to get the latest Rules Engine NuGet package. If you cannot recreate your project, follow these steps:

1. Update the csproj file by adding the Rules Engine NuGet package and updating the WebJobs SDK package as follows:

<PackageReference Include="Microsoft.Azure.Workflows.RulesEngine" Version="1.0.0" />
<PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Sdk" Version="1.2.0" />

2. Update the code so that the rule explorer is created as part of the constructor:

public user_function_class(ILoggerFactory loggerFactory)
{
    logger = loggerFactory.CreateLogger<user_function_class>();
    this.ruleExplorer = new FileStoreRuleExplorer(loggerFactory);
}

The rule explorer above needs to be used for getting any ruleset in the RunRules method:

var ruleSet = this.ruleExplorer.GetRuleSet(ruleSetName);

3. Open a terminal and run dotnet restore.
4. Run dotnet build.

Contact Us

Have feedback or questions about the Rules Engine? We'd love to hear from you. Reply directly to this blog post or reach out to us through this form. Your input helps shape the future of Logic Apps and the Rules Engine.
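To round out the "How to use it" steps, here is a hedged sketch of what a RunRules method consuming the retrieved ruleset can look like, based on the pattern in the VS Code sample project. The types shown (RuleEngine, TypedXmlDocument, FileStoreRuleExplorer) come from the Microsoft.Azure.Workflows.RulesEngine package, but treat the exact constructor and property names as assumptions and defer to the generated sample code:

using System.Xml;
using Microsoft.Azure.Workflows.RulesEngine;

public string RunRules(string ruleSetName, string documentType, string inputXml)
{
    // Retrieve the ruleset by name from the project's rules store
    var ruleSet = this.ruleExplorer.GetRuleSet(ruleSetName);

    // Wrap the incoming XML payload as a typed fact for the engine
    var xmlDocument = new XmlDocument();
    xmlDocument.LoadXml(inputXml);
    var typedXmlDocument = new TypedXmlDocument(documentType, xmlDocument);

    // Execute the ruleset against the fact. "Update" instructions inside
    // rules trigger re-evaluation (forward chaining) within this call.
    var ruleEngine = new RuleEngine(ruleSet); // constructor arguments are an assumption
    ruleEngine.Execute(new object[] { typedXmlDocument });

    // The engine mutates the typed document in place; return the result
    return typedXmlDocument.Document.OuterXml;
}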