In the previous post in this series, we discussed how Logic Apps Standard can be configured to scale for high-throughput workloads. In this post, we showcase an application built with Azure Integration Services components that streamlines the processing of a large number of orders, received as a batch, that must be completed within a predetermined amount of time.
The orders arrive as Service Bus messages that reference invoices stored in a blob container. To orchestrate this workflow, the Logic App workflows respond to each Service Bus message by initiating a series of actions: retrieving the relevant invoice from blob storage, invoking services such as a rules engine to process the invoice, and finally transmitting the transformed data to the appropriate backend systems.
To meet scaling requirements, the solution divides the processing between two Logic Apps, each housing multiple workflows. This architectural decision separates the scaling profiles of the two stages, enabling faster scale-out and more efficient resource utilization.
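As an illustration of how the prewarmed-instance and maximum scale-out settings listed below can be pinned down in infrastructure-as-code, here is a sketch of an ARM resource fragment. The app and plan names are hypothetical, and the property names (`minimumElasticInstanceCount`, `functionAppScaleLimit` on `siteConfig` of `Microsoft.Web/sites`) should be verified against your template's API version:

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-09-01",
  "name": "invoice-preprocessing-app",
  "kind": "functionapp,workflowapp",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'ws3-plan')]",
    "siteConfig": {
      "minimumElasticInstanceCount": 20,
      "functionAppScaleLimit": 100
    }
  }
}
```

Starting from 20 prewarmed instances means the apps can absorb the initial burst while the platform scales further toward the 100-instance cap.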
We also showcase the levels of scalability this application can achieve by providing various performance metrics. We hope you can use this as a blueprint for building scalable applications with Logic Apps and other Azure Integration Services components.
The Service Bus messages are generated by an ETL pipeline that extracts each invoice into a separate blob and then notifies Logic Apps by dropping a message in a Service Bus queue. A daily influx of 1 million invoices is ingested through this process. A workflow with a Service Bus trigger picks up the messages promptly and invokes a child workflow to begin the preprocessing phase. The child workflow orchestrates several data transformation steps and calls other services to enrich each invoice with additional data. Once preprocessing is complete, the modified invoice is saved back to a blob, and the ingestion system is notified by dropping a message on another Service Bus queue.
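This is essentially the claim-check pattern: the Service Bus message carries only a reference to the invoice blob, not the invoice payload itself. A minimal sketch of how such a notification body might be built and read back (the field names are illustrative, not taken from the actual pipeline):

```python
import json

def build_notification(container: str, blob_name: str, invoice_id: str) -> str:
    """Build a claim-check message body that points at the invoice blob."""
    return json.dumps({
        "invoiceId": invoice_id,
        # The workflow uses this path to fetch the invoice from blob storage.
        "blobPath": f"{container}/{blob_name}",
    })

def parse_notification(body: str) -> dict:
    """What the Service Bus-triggered workflow would extract from the message."""
    msg = json.loads(body)
    return {"invoiceId": msg["invoiceId"], "blobPath": msg["blobPath"]}

body = build_notification("invoices", "2024/06/inv-000123.json", "inv-000123")
print(parse_notification(body)["blobPath"])  # invoices/2024/06/inv-000123.json
```

Keeping messages small this way lets the queue absorb high ingestion rates while the blobs hold the heavyweight invoice content.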
A single Service Bus-triggered workflow picks up the messages coming from the invoice-preprocessing system and orchestrates calls to various backend systems.
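For reference, a built-in Service Bus trigger in a Logic Apps Standard workflow definition looks roughly like the fragment below. The queue and connection names are illustrative, and the operation ID shown is the built-in auto-complete receive trigger; check your workflow's generated definition for the exact shape:

```json
"triggers": {
  "When_messages_are_available_in_a_queue": {
    "type": "ServiceProvider",
    "inputs": {
      "parameters": {
        "queueName": "preprocessed-invoices"
      },
      "serviceProviderConfiguration": {
        "connectionName": "serviceBus",
        "operationId": "receiveQueueMessages",
        "serviceProviderId": "/serviceProviders/serviceBus"
      }
    }
  }
}
```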
| | Invoice-preprocessing Logic App | Invoice-ingestion Logic App |
|---|---|---|
| Number of workflows | 2 | 1 |
| Triggers | Service Bus; Request (for nested workflow) | Service Bus |
| Actions | Receive (25), Preprocess (40): Service Bus, SQL, Blob, Compose, Query, Variables, MQ, HTTP, Functions, JavaScript | Invoice-ingestion (90): Service Bus, SQL, Blob, Variables, Liquid, Query, HTTP |
| Number of storage accounts | 5 | 5 |
| Prewarmed instances | 20 | 20 |
| WS plan | WS3 | WS3 |
| Max scale-out setting | 100 | 100 |
(Charts: execution delay and instance count over time for each Logic App.)
| | Invoice-preprocessing workflows | Invoice-ingestion workflow |
|---|---|---|
| Total number of invoices processed | 1,000,000 | 1,000,000 |
| Total processing time | 4.5 hours | 4.5 hours |
| Triggers | 50K/min peak Service Bus message read; 10K/min sustained read; 1M Service Bus messages received in about 40 min | 10K/min sustained Service Bus message read |
| Actions | 850K actions/min peak execution rate (receiving workflow); 150K actions/min sustained rate (preprocessing workflow); 65M total actions executed | 400K actions/min sustained execution rate; 90M total actions executed |
| Jobs | 1M/min peak job rate; 200K/min sustained job rate | 500K/min sustained job rate |
| Execution delay | 95th percentile rose to 400 s during scale-out and returned to below 200 ms at sustained load | 95th percentile rose to 60 s during scale-out and returned to below 20 ms at sustained load |
| Scale-out | Instance count scaled from 20 to 80 in about 40 min | Instance count scaled from 20 to 100 in about 40 min |
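As a quick sanity check, the action totals line up with the per-invoice action counts from the configuration table (25 actions in the receive workflow plus 40 in preprocessing; 90 in the ingestion workflow):

```python
invoices = 1_000_000

# Invoice-preprocessing app: Receive has 25 actions, Preprocess has 40.
preprocessing_actions = (25 + 40) * invoices  # 65_000_000, matching the 65M total

# Invoice-ingestion app: the single workflow has 90 actions.
ingestion_actions = 90 * invoices  # 90_000_000, matching the 90M total

# Trigger side: 1M messages received in roughly 40 minutes.
avg_read_rate = invoices / 40  # 25_000 messages/min on average, vs. the 50K/min peak

print(preprocessing_actions, ingestion_actions, avg_read_rate)
```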