How to improve performance of an Azure timer trigger function


I have an Azure Function app (hosted on the Consumption plan) with a timer trigger that polls AWS SQS for events, reads data from an AWS S3 bucket, and ingests it into Log Analytics custom tables.

What I noticed during a performance run is that when the rate rises to roughly 2,500+ events/sec (anything lower works just fine), the function app struggles to keep up, so events accumulate in the S3 bucket.

The function app is simple in its design:

  • Runs every 5 minutes.
  • Reads SQS/S3 for data, processes the files, and ingests them into custom tables using the Logs Ingestion API.
  • With each execution, it accumulates events in memory and uploads via the API once a limit of 2,000 events is reached; before it terminates, it uploads whatever events have been collected so far (a simplified sketch of this flush logic follows the list).
  • So I am not invoking the POST API too often, and I am not hitting API limits either (no 429s).
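Roughly, the upload path looks like the sketch below (simplified; it assumes the azure-monitor-ingestion SDK, the environment variable names are placeholders for my app settings, and the SQS/S3 read logic is omitted):

```python
import os
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

BATCH_LIMIT = 2000  # flush once this many events are accumulated

client = LogsIngestionClient(
    endpoint=os.environ["DCE_ENDPOINT"],  # data collection endpoint (placeholder)
    credential=DefaultAzureCredential(),
)

def flush(batch):
    client.upload(
        rule_id=os.environ["DCR_IMMUTABLE_ID"],  # data collection rule id (placeholder)
        stream_name=os.environ["STREAM_NAME"],   # custom table stream (placeholder)
        logs=batch,
    )

def run(events):
    """events: iterable of dicts read from the S3 files referenced by SQS messages."""
    buffer = []
    for event in events:
        buffer.append(event)
        if len(buffer) >= BATCH_LIMIT:
            flush(buffer)
            buffer = []
    # before the function terminates, upload whatever has been collected so far
    if buffer:
        flush(buffer)
```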

From the logs ingestion point of view, the time taken by ingestion-time transformations is almost negligible.

I could move to a Premium plan, but I want to know whether this is the maximum the Consumption plan can achieve.

Any ideas how to improve this at scale?


2 Replies
Improving the performance of an Azure timer trigger function that handles a high volume of events requires optimizing several aspects of the application. Here are some strategies to consider:

1. Scale-Out and Concurrency: The Azure Functions Consumption plan scales out automatically, but within limits. Ensure your function can handle parallel work efficiently, and review host.json concurrency settings such as 'maxConcurrentCalls' (Service Bus) and 'maxOutstandingRequests' (HTTP) if other trigger types are involved.

2. Batch Processing: You're already batching requests before sending them to the Log Analytics custom tables, which is good. Consider whether there's room to tune the batch size or the batching logic itself so that you balance memory usage against the number of network calls; one size-aware variant is sketched below.
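For example, a minimal sketch of a batcher that flushes on either an event count or an approximate serialized payload size, whichever is hit first (the thresholds are illustrative, not official Log Analytics limits):

```python
import json

MAX_EVENTS = 2000            # illustrative count threshold
MAX_PAYLOAD_BYTES = 900_000  # illustrative size threshold; keep below the API's per-call cap

def batched(events):
    """Yield batches bounded by both event count and approximate JSON size."""
    batch, size = [], 0
    for event in events:
        event_size = len(json.dumps(event).encode("utf-8"))
        # flush before appending if either bound would be exceeded
        # (a single oversized event still goes through in its own batch)
        if batch and (len(batch) >= MAX_EVENTS or size + event_size > MAX_PAYLOAD_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(event)
        size += event_size
    if batch:
        yield batch
```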

3. Efficient Code: Review your function's code to ensure it's as efficient as possible. Look for any potential bottlenecks or inefficient loops/operations, especially those that might not scale linearly with the number of events.

4. Connection Reuse: Ensure you're reusing connections and client objects wherever possible, particularly for AWS SQS and S3 and the logs ingestion client. Creating new clients on every invocation adds connection setup cost and can significantly impact performance.
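A minimal sketch, assuming Python with boto3 and the azure-monitor-ingestion SDK: create the clients once at module scope so warm invocations reuse them (the environment variable names are placeholders):

```python
import os
import boto3
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Created once per worker process; reused across invocations on a warm instance.
sqs = boto3.client("sqs", region_name=os.environ.get("AWS_REGION", "us-east-1"))
s3 = boto3.client("s3", region_name=os.environ.get("AWS_REGION", "us-east-1"))
logs_client = LogsIngestionClient(
    endpoint=os.environ["DCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

def main(timer) -> None:
    # The trigger entry point only uses the shared clients; it never recreates them.
    response = sqs.receive_message(
        QueueUrl=os.environ["SQS_QUEUE_URL"], MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        ...  # read the referenced S3 object and hand events to the batching logic
```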

5. Optimize Memory Usage: High memory usage can lead to increased garbage collection, which can impact performance. Ensure your function is using memory efficiently, particularly with respect to how it accumulates events in memory.
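For instance, rather than reading each S3 object fully into memory, you could stream it and yield events as you go (this sketch assumes newline-delimited JSON files; adjust the parsing to your format):

```python
import json
import boto3

s3 = boto3.client("s3")

def events_from_object(bucket: str, key: str):
    """Stream events from an S3 object without buffering the whole file in memory."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    for line in body.iter_lines():  # botocore StreamingBody yields one line at a time
        if line:
            yield json.loads(line)
```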

6. Premium Plan Consideration: Moving to a Premium plan can provide enhanced performance due to better compute options and the ability to keep instances warm, reducing cold start times. However, this should be a consideration after optimizing within the Consumption plan as much as possible.

7. Monitoring and Diagnostics: Utilize Azure Monitor and Application Insights to get detailed insights into your function's performance. Look for patterns that indicate performance degradation and focus on those areas for optimization.

8. Parallel Processing: If your logic allows, consider processing multiple files or batches in parallel. Azure Functions supports asynchronous execution, which can be leveraged to process multiple tasks concurrently.
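As a sketch, if the files are independent, a thread pool lets one invocation work on several S3 objects at once; process_object here is a hypothetical callable wrapping your download-and-ingest step:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_all(object_keys, process_object, max_workers: int = 8):
    """Process independent S3 objects concurrently within a single invocation."""
    failed = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(process_object, key): key for key in object_keys}
        for future in as_completed(futures):
            try:
                future.result()
            except Exception:
                # remember the key so its SQS message is not deleted and gets retried
                failed.append(futures[future])
    return failed
```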

9. Function App Splitting: If there's a logical separation in the processing steps, consider splitting the function app into multiple smaller functions. This can allow for more granular scaling and can isolate performance bottlenecks.
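One common shape for such a split, sketched below under the Python v2 programming model with an Azure Storage queue: the timer function only enumerates work, and a queue-triggered function does the heavy lifting so the platform can scale it out per message. list_pending_keys and ingest_object are hypothetical helpers standing in for your existing SQS/S3 and ingestion code.

```python
import json
import os

import azure.functions as func
from azure.storage.queue import QueueClient, TextBase64EncodePolicy

app = func.FunctionApp()

@app.timer_trigger(arg_name="timer", schedule="0 */5 * * * *")
def enqueue_work(timer: func.TimerRequest) -> None:
    # The timer function only enumerates pending S3 objects and enqueues one message per key.
    queue = QueueClient.from_connection_string(
        conn_str=os.environ["AzureWebJobsStorage"],
        queue_name="s3-objects",
        message_encode_policy=TextBase64EncodePolicy(),  # queue triggers expect Base64 by default
    )
    for key in list_pending_keys():  # hypothetical helper: list pending work from SQS/S3
        queue.send_message(json.dumps({"key": key}))

@app.queue_trigger(arg_name="msg", queue_name="s3-objects", connection="AzureWebJobsStorage")
def process_object(msg: func.QueueMessage) -> None:
    # Queue-triggered executions can fan out across instances, unlike a single timer run.
    key = json.loads(msg.get_body().decode("utf-8"))["key"]
    ingest_object(key)  # hypothetical helper: download from S3 and upload to Log Analytics
```

A queue-based split also gives you per-message retries and a poison queue for objects that repeatedly fail, which is harder to achieve inside a single timer execution.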

10. Networking Considerations: Since your Azure Function interacts with AWS services, network latency can be a factor. Evaluate the network path and see if there are optimizations, such as using Azure ExpressRoute or optimizing how data is transferred between Azure and AWS.

11. Cold Start Mitigation: In the Consumption Plan, cold starts can affect performance, especially under scaling scenarios. While this is less of an issue in the Premium Plan due to pre-warmed instances, in the Consumption Plan, optimizing for cold start times is crucial.

Before moving to the Premium Plan, exhaust the optimization possibilities within the Consumption Plan. The switch should be considered when you're certain that the limitations are not due to the application design but due to the inherent constraints of the Consumption plan. Often, significant improvements can be achieved through optimization without incurring the additional cost of a higher-tier plan.

@Kugan Nadaraja Thank you for your response.