Improve Performance of Read Intensive Workloads on Azure DB for PostgreSQL using Query Caching
Published Jul 08 2019 04:22 PM

 

When building cloud native applications using managed services, latency between the application and database layers can add to query response times and impact overall application performance. The impact is most visible in latency-sensitive applications, such as gaming workloads or digital payment apps, and can degrade the end-user experience for customers. 

 

When leveraging managed cloud services to build your applications, one of the challenges (or blessings) is that the placement of application VMs and the underlying database servers and nodes within a region is abstracted from you. If the application server and database node are not collocated, the network latency between the application and the database node can vary. As a result, if you run benchmark tests or compare the performance of an application running on-premises with the same application running against Azure Database for PostgreSQL, you might see a difference in query response times. 

 

For data modification queries, it is therefore recommended to minimize singleton transactions and leverage batching with a large batch size to reduce the number of round trips between the application and the database server (a minimal batching sketch follows the list below). For reads, one way to improve query response time and latency is client-side query caching, especially when you are designing and architecting a mission-critical application with low latency requirements. Query caching is one of the common techniques developers use to improve query response times. The advantages of client-side query caching are: 

 

  • It cuts down the network latency between the application and database layers significantly, and avoids SQL parsing, compilation, and execution overhead, thereby improving application performance. 
  • It reduces overhead and pressure on the database server. The database doesn’t experience the memory pressure caused by caching frequently accessed data buffers for complex queries that join multiple large tables. 
  • It reduces connection churn, which also adds significant overhead on the PostgreSQL server. For more details on connection overhead with PostgreSQL, read our blog post here. 
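
As a minimal sketch of the batching recommendation above (the player_scores table and its columns are hypothetical, used only for illustration), a single multi-row INSERT replaces several singleton statements and cuts the number of client-server round trips:

-- Singleton inserts: one round trip (and one implicit transaction) per row
INSERT INTO player_scores (player_id, score) VALUES (1, 100);
INSERT INTO player_scores (player_id, score) VALUES (2, 250);
INSERT INTO player_scores (player_id, score) VALUES (3, 175);

-- Batched insert: the same rows in a single round trip and a single transaction
INSERT INTO player_scores (player_id, score)
VALUES (1, 100), (2, 250), (3, 175);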

The disadvantage of caching is that the data can become stale and needs to be refreshed frequently, so for OLTP or highly transactional workloads with frequent data modification, query caching can be an overhead and is not recommended. For read-heavy workloads where the data is updated less frequently, especially in data warehousing or ETL scenarios, the benefits of query caching can outweigh the overhead of maintaining an external query cache.  

 

Currently, Redis and Memcached are the popular external query caching solutions. In addition, with PostgreSQL there are community-developed open source proxy solutions like Pgpool-II, and proprietary solutions like Heimdall Data, which can provide query result-set caching to improve overall application performance.  
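
Below is a minimal sketch of how result caching can be enabled in Pgpool-II's in-memory query cache via pgpool.conf. The parameter names follow the Pgpool-II documentation, but exact names, units, and defaults vary between Pgpool-II versions, so treat the values as illustrative:

# pgpool.conf -- in-memory query result cache (illustrative settings)
memory_cache_enabled = on                 # turn on query result caching
memqcache_method = 'shmem'                # cache in shared memory ('memcached' is the alternative)
memqcache_total_size = 64MB               # total size of the result cache
memqcache_maxcache = 400kB                # largest single result set that will be cached
memqcache_expire = 60                     # seconds before a cached entry expires (0 = no expiration)
memqcache_auto_cache_invalidation = on    # invalidate cached results when underlying tables are modified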

 

To back the above with some numbers, we ran tests by provisioning a Linux VM on Azure to run pgbench, with Pgpool-II configured for query caching, against Azure Database for PostgreSQL. 
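
For reference, the pgbench_accounts table used in the queries below is created by pgbench's initialization mode; the host, user, and scale factor here are illustrative and should be adjusted for your own server:

# Initialize pgbench tables (including pgbench_accounts) against the Azure Database for PostgreSQL server
pgbench -i -s 100 -h mydemoserver.postgres.database.azure.com -U pgbenchuser@mydemoserver postgres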

 


With No Query Caching

Let’s first assume no caching. For testing, we use a simple query and an aggregation query as shown below. In all cases, queries are executed over an already existing connection, so times never include actual connection establishment time. 
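
A note on reading the timings below: "Planning Time" and "Execution Time" come from EXPLAIN (ANALYZE) and are measured on the server, while the trailing "Time:" line is the client-observed elapsed time reported by psql, which includes the network round trip. The latter is enabled with:

-- Enable client-side elapsed-time reporting in psql;
-- the "Time: ... ms" lines in the outputs below come from this setting.
\timing on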

 

Simple Query 

explain (analyze, buffers, timing) 
select * from pgbench_accounts where aid = 123456; 

                                   QUERY PLAN 
------------------------------------------------------------------------------------ 
 Index Scan using pgbench_accounts_pkey on pgbench_accounts 
   Index Cond: (aid = 123456) 
   Buffers: shared hit=4 
 Planning Time: 0.082 ms 
 Execution Time: 0.074 ms 

Time: 2.335 ms 

 

For simple queries, the response time is dominated not by query execution but by the network latency between client and server: planning plus execution account for less than 0.2 ms, while the client-observed time is 2.335 ms.  

 

Aggregation Query 

explain (analyze, buffers, timing) 
select bid, sum(abalance) from pgbench_accounts group by bid; 

Planning time: 0.119 ms 
Execution time: 274.357 ms 

Time: 286.034 ms 

 

With Query Cache

With query caching we not only save query execution but, for a cache hit, fully eliminate the communication between the client and the PostgreSQL server. 

 

Non-cacheable queries

For queries that can’t be cached, the query is executed as before, but caching adds some overhead because every query still needs to be analyzed and checked to determine whether it is cacheable.  

 

explain (analyze, buffers, timing) 
select * from pgbench_accounts where aid = 123456; 

                                   QUERY PLAN 
------------------------------------------------------------------------------------ 
 Index Scan using pgbench_accounts_pkey on pgbench_accounts 
   Index Cond: (aid = 123456) 
   Buffers: shared hit=4 
 Planning Time: 0.078 ms 
 Execution Time: 0.036 ms 

Time: 2.616 ms 

 

We can see that using the query cache increased the response time from ~2.3 ms to ~2.6 ms (the EXPLAIN option makes the query non-cacheable).  
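
As an aside, if you need a specific statement to bypass the cache without wrapping it in EXPLAIN, Pgpool-II recognizes a special SQL comment for this purpose; check the documentation of the Pgpool-II version you are running:

-- The leading comment tells Pgpool-II not to cache (or serve from cache) this query
/*NO QUERY CACHE*/ select * from pgbench_accounts where aid = 123456;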

 

Cacheable queries

On the first execution of a cacheable query, the overhead is slightly higher than above, because the query and its result set need to be written to the cache as an additional step. 

 

select * from pgbench_accounts where aid = 123456; 
Time: 2.799 ms 

 

select bid, sum(abalance) from pgbench_accounts group by bid; 
Time: 291.213 ms

 

Subsequent executions now benefit from the query cache. 

 
select * from pgbench_accounts where aid = 123456; 
Time: 0.304 ms 

 

select bid, sum(abalance) from pgbench_accounts group by bid; 
Time: 0.284 ms 

 

In the cached scenario the query result can be retrieved locally, without having to send the query to the remote server. The performance improvement is substantial in this case, even for the simple query. 
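
To verify that repeated queries are actually being served from the cache, Pgpool-II exposes cache statistics through a pseudo-SQL command issued over the same connection (the exact set of columns varies by Pgpool-II version):

-- Pgpool-II query cache statistics; comparing num_cache_hits with num_selects
-- gives the cache hit ratio for the current Pgpool-II instance
SHOW POOL_CACHE;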

 

Conclusion

Summarizing the results and observations: with query caching, query response time improved by about 86%, at the cost of roughly 12% degradation for non-cacheable queries and roughly 20% degradation for the first execution of cacheable queries. This is an acceptable trade-off if the data is not modified frequently. 


To conclude, for read-heavy workloads with infrequent data modification, query caching can significantly improve response time and latency, and the benefits outweigh the overhead of maintaining an external cache. Pgpool-II (open source) and Heimdall Data (proprietary) both support query result caching by intercepting connections between the client/application and the PostgreSQL server; they also provide connection pooling and load balancing, which can further boost application performance together with read replicas.

 


 

Benjamin Satzger

Principal Software Engineer, Microsoft

 

Parikshit Savjani

Senior Program Manager, Microsoft
