This blog post is a collaboration between Nikhil Sira (Software Engineer), Rohitha Hewawasam (Principal Software Engineering Manager) and Kent Weare (Principal Product Manager) from the Azure Logic Apps team.
This functionality has since reached General Availability. Please see the announcement here:
We have recently published an update to how we emit telemetry for Application Insights in Azure Logic Apps (Standard). This update is currently in Public Preview and is an opt-in feature; it can be enabled without introducing risk to your telemetry. Customers who choose not to opt in will continue to emit telemetry using the existing method.
Based upon customer feedback, we had opportunities to improve the observability experience for Azure Logic Apps (Standard) by:
General
Requests table
Traces table
Exceptions
Filtering (at source)
Metrics
For additional details on all these capabilities, please refer to the Scenario Deep dive below.
To use the new Application Insights enhancements, your Logic App (Standard) project needs to be on the Functions V4 runtime. This is automatically enabled from within the Azure portal when you create a new Azure Logic App (Standard) instance, or it can be set by modifying the Application Settings. Functions V4 runtime support is also available for VS Code based projects. For additional details on Functions V4 runtime support for Azure Logic Apps, please refer to the following blog post.
As mentioned in the introduction, this new version of Application Insights support is an opt-in experience; customers enable it by modifying the host.json file of their Logic Apps (Standard) project.
To modify your host.json file from the Azure portal, please perform the following steps:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
"version": "[1, 2.00)"
},
"extensions": {
"workflow": {
"Settings": {
"Runtime.ApplicationInsightTelemetryVersion": "v2"
}
}
}
}
Enabling Application Insights can also be configured from within a Logic Apps project in VS Code by editing the host.json file and including the following information.
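The host.json content is the same as the snippet shown above for the Azure portal:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1, 2.00)"
  },
  "extensions": {
    "workflow": {
      "Settings": {
        "Runtime.ApplicationInsightTelemetryVersion": "v2"
      }
    }
  }
}
```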
Note: The configuration that was just discussed uses the default verbosity level of Information. Please see the Filtering section below for more options on how to filter out specific events at source.
This is the primary table where you will find events for Azure Logic Apps (Standard). To highlight all the various fields tracked within this table, let’s use the following workflow to generate data. In addition, we will add a Custom Tracking Id in our trigger, which will pull an orderId from the body of the incoming message.
To demonstrate where tracked properties are stored, let’s set up a Tracked Property in our Compose action by clicking on Settings and then adding an item called solutionName with the name of our integration solution.
After we run this workflow and wait a couple of minutes, we will see logs appear in our Application Insights instance by running the following query:
requests
| sort by timestamp desc
| take 10
Let’s further explore our results. Please refer to the numbers in the following image that correspond with additional context below.
Next, let’s explore the record that represents our trigger.
Let’s now move to our Compose action to discover what data is being captured there.
In addition to all the details we just went through, some common fields you may discover within the logs include:
operation_Id: This represents the id for the component that just executed. Should there be exceptions or dependencies, this value carries over into those tables, allowing us to link these executions.
Category: For Logic App executions, we can expect this value to always be Workflow.Operations.Triggers or Workflow.Operations.Actions. As we saw at the beginning of this post, we can filter the verbosity of logs based upon this category.
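Because operation_Id carries across tables, a union query can stitch together all records for a single execution. As a sketch, with a placeholder run id:

```kusto
union requests, traces, exceptions, dependencies
| where operation_Id == "<runId>"    // substitute your workflow run id
| sort by timestamp asc
```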
With the basics out of the way, here are a couple queries that you can use to query using Client Tracking Ids and Tracked Properties.
requests
| where operation_Name == "Request-Response-Workflow"
| extend correlation = todynamic(tostring(customDimensions.correlation))
| where correlation.clientTrackingId == "123456"
requests
| where operation_Name == "Request-Response-Workflow" and customDimensions has "trackedProperties"
| extend trackedProperties = todynamic(tostring(customDimensions.trackedProperties))
| where trackedProperties.solutionName == "kewear-LA-AppInsights"
Another feature we have included in the Requests table is the ability to track retries. To demonstrate this, we can create a workflow that will call a URL that doesn’t resolve. We will also implement a Retry Policy within the workflow which will force retries. In the example below we will set this policy to be based upon a Fixed Interval that will retry 3 times, once every 60 seconds.
After executing our workflow, if we explore our Requests table once again, we will discover the following:
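As a sketch (the workflow name below is hypothetical, and the exact shape of the retry details within customDimensions may vary in your telemetry), a query along these lines surfaces the request records produced by those retries:

```kusto
requests
| where operation_Name == "Retry-Workflow"    // hypothetical workflow name
| sort by timestamp desc
| take 10
```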
We now capture the name of the connector that participated in the trigger or action event that is captured within the Requests table.
requests
| where customDimensions.Category == "Workflow.Operations.Triggers" and customDimensions.triggerType == "ApiConnectionWebhook" and customDimensions.apiName == "commondataservice"
requests
| where customDimensions.actionType == "ApiConnection" and customDimensions.apiName == "office365"
For both triggers and actions, we will differentiate between the types of connections that exist. You may see different values in the actionType and triggerType fields based upon whether it is an API Connection, an API Webhook Connection, built-in (like the HTTP Request trigger), or a Service Provider.
For example, if we want to identify requests where the HTTP Request trigger has been used, we can run the following query:
requests
| where customDimensions.triggerType == "Request"
Similarly, we can find all the Compose actions that exist by issuing the following query:
requests
| where customDimensions.actionType == "Compose"
Since we now differentiate between trigger and action events, we can query for just a subset of events. For example, we may want to query for all actions for a specific workflow and can do so using the following query:
requests
| where customDimensions.Category == "Workflow.Operations.Actions" and operation_Name == "Request-Response-Workflow"
Similarly, we can also query for just trigger events for a specific workflow by executing the following query:
requests
| where customDimensions.Category == "Workflow.Operations.Triggers" and operation_Name == "Request-Response-Workflow"
The run id is a powerful concept in Azure Logic Apps. Whenever you look at the Run History for a workflow execution, you are referencing a Run Id to see all the inputs and outputs for a workflow execution. If we want to perform a similar query against Application Insights telemetry, we can issue the following query and provide our Run Id:
requests
| where operation_Id == "08585287554177334956853859655CU00"
Within the Traces table we can discover more verbose information about workflow run start and end events. This information will be represented as two distinct events since a logic app workflow execution can be long running.
traces
| where customDimensions.Category == "Workflow.Operations.Runs"
We can also further filter based upon our Run Id by using the following query:
traces
| where customDimensions.Category == "Workflow.Operations.Runs"
and operation_Id == "08585287571846573488078100997CU00"
Batch is a special capability found in Azure Logic Apps Consumption and Standard. To learn how to implement it in Azure Logic Apps (Standard), please review the following blog post. As part of the telemetry payload, we have a category property that includes a value of Workflow.Operations.Batch whenever there is a Batch send or receive event. We can subsequently write the following query that will retrieve these events for us.
traces
| where customDimensions.Category == "Workflow.Operations.Batch"
We have made some additional investments to this table as well. Let’s run through an example of an exception to illustrate this. We will use an example similar to the one shown previously; the difference is that we will generate an exception within the Compose action by dividing by zero.
Let’s explore the type of information that is captured within this exception record:
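To pull back the most recent exception records for inspection, a simple query against the standard exceptions table works:

```kusto
exceptions
| sort by timestamp desc
| take 10
```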
Dependency events are emitted when you have one resource calling another, where both of those resources are using Application Insights. In the context of Azure Logic Apps, it's typically a service calling another service by using HTTP, a database, or a file system. Application Insights measures the duration of dependency calls and whether each call succeeds or fails, along with information like the name of the dependency. You can investigate specific dependency calls and correlate them to requests and exceptions.
To demonstrate how dependencies work, we will create a parent workflow that calls a child workflow over HTTP.
We can run the following query to view the linkages between our parent workflow and the dependency record created for the child workflow.
union requests, dependencies
| where operation_Id contains "<runId>"
When we look at the results we will discover the following:
Another benefit of this operation_Id linkage between our parent workflow and our child workflow is the Application Map found in Application Insights. This linkage allows for the visualization of our parent workflow calling our child workflow.
There are generally two ways to filter. You can filter by writing queries as we have previously discussed on this page. In addition, we can also filter at source by specifying criteria that will be evaluated prior to events being emitted. When specific filters are applied at source, we can reduce the amount of storage required and subsequently reduce our operational costs.
A key attribute that determines filtering is the Category attribute. You will find this attribute within the customDimensions node of a request or trace record.
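To see which Category values your instance is actually emitting, you can summarize the requests table by that attribute:

```kusto
requests
| summarize count() by tostring(customDimensions.Category)
```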
For Requests, we have the following attributes that we can differentiate between and associate different verbosity levels:
| Category | Purpose |
| --- | --- |
| Workflow.Operations.Triggers | This category represents a request record for a trigger event |
| Workflow.Operations.Actions | This category represents a request record for an action event |
For each of these categories we can independently set the verbosity by specifying a logLevel within our host.json file. For example, if we are only interested in errors for actions and trigger events, we can specify the following configuration:
{
"logging": {
"logLevel": {
"Workflow.Operations.Actions": "Error",
"Workflow.Operations.Triggers": "Error"
}
}
}
For additional information about logLevels, please refer to the following document.
When it comes to the traces table, we also have control over how much data is emitted. This includes information related to:
| Category | Purpose |
| --- | --- |
| Workflow.Operations.Runs | Start and end events that represent a workflow’s execution |
| Workflow.Operations.Batch | Batch send and receive events |
| Workflow.Host | Internal checks for background services |
| Workflow.Jobs | Internal events related to job processing |
| Workflow.Runtime | Internal logging events |
Below are some host.json examples of how you may choose to enable different verbosity levels for trace events:
{
"logging": {
"logLevel": {
"Workflow.Host": "Warning",
"Workflow.Jobs": "Warning",
"Workflow.Runtime": "Warning"
}
}
}
{
"logging": {
"logLevel": {
"default": "Warning",
"Workflow.Operations.Runs": "Information",
"Workflow.Operations.Actions": "Information",
"Workflow.Operations.Triggers": "Information"
}
}
}
Note: If you do not specify any logLevels, the default of Information will be used.
Through the investments that were made in enhancing our Application Insights schema, we are now able to gain additional insights from a Metrics perspective. From your Application Insights instance, select Metrics from the left navigation menu. From there, select your Application Insights instance as your Scope and then select workflow.operations as your Metric Namespace. From there, you can select the Metric that you are interested in such as Runs Completed and an Aggregation like Count or Avg. Once you have configured your filters, you will see a chart that represents workflow executions for your Application Insights instance.
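The same data can also be explored from the Logs blade. As a sketch, assuming the metric surfaces in the standard customMetrics table under the name shown in the portal (the "Runs Completed" name here is taken from the portal label and may differ in your instance):

```kusto
customMetrics
| where name == "Runs Completed"   // assumed metric name, matching the portal label
| summarize sum(value) by bin(timestamp, 1h)
```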
If you would like to filter based upon a specific workflow, you can do so by using filters. Filters require multi-dimensional metrics to be enabled on your Application Insights instance. Once that is enabled, subsequent events can be filtered. With multi-dimensional metrics enabled, we can now click on the Add filter button and then select Workflow from the Property dropdown, followed by = as our Operator and then select the appropriate workflow(s).
Using filtering will allow you to target a subset of the overall events that are captured in Application Insights.
As demonstrated, we have simplified the experience of querying, accessing, and storing observability data. This includes providing customers with more control over which events are emitted and at what verbosity, allowing customers to reduce storage costs. We also ensure consistency when emitting events by using a consistent id/name, resource, and correlation details.
If you would like to provide feedback on this feature, please leave a comment below and we can connect to further discuss.
To see a walk-through of this content, please check out the following video.