Azure Database for PostgreSQL allows you to configure and access PostgreSQL server logs. There are various scenarios for using the logs, such as identifying errors, troubleshooting suboptimal performance, and repairing configuration issues. Azure PostgreSQL logs are available on every node of a flexible server. You can ship logs to a storage account or to the Log Analytics service. This post discusses how you can configure Azure Database for PostgreSQL logs and shares methods to access these log files using the Log Analytics service.
PostgreSQL generates event logs that contain useful information. SQL query failures, failed login attempts, and deadlocks are captured in the logs by default. These error messages can help identify various application issues. For example, if you converted a legacy application from Oracle to PostgreSQL, some queries may not convert correctly to PostgreSQL syntax. These incorrectly formatted queries generate error messages in the logs, which can help identify the problematic application code.
In addition to the default logging, you can modify PostgreSQL logging parameters to capture information that is useful for identifying and solving issues such as poor performance, and for security auditing. These logging parameters help capture information such as connections and disconnections, schema modification queries, slow queries with their duration, queries that take time because they are waiting on locks, queries consuming temporary disk storage, and the backend autovacuum process consuming resources.
This log information can help troubleshoot potential performance and auditing issues while using the database. This post provides more details about enabling this logging and its benefits.
Step 1. Configure logging parameters
Azure Database for PostgreSQL includes several logging server parameters that control PostgreSQL logging behavior. You can configure these parameters as per your application requirements. For more information about server parameters, please refer to the When to Log and What to Log sections of the PostgreSQL documentation.
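As a sketch, logging parameters can also be changed from the Azure CLI rather than the portal. The resource group and server names below are placeholders, and the parameter values are examples only, not recommendations:

```shell
# Placeholder names; replace with your own resource group and server.
RG="my-resource-group"
SERVER="my-pg-server"

# Log all DDL statements (CREATE/ALTER/DROP)
az postgres flexible-server parameter set \
  --resource-group "$RG" --server-name "$SERVER" \
  --name log_statement --value ddl

# Log any statement that runs longer than 1000 ms
az postgres flexible-server parameter set \
  --resource-group "$RG" --server-name "$SERVER" \
  --name log_min_duration_statement --value 1000

# Log sessions that wait on a lock longer than deadlock_timeout
az postgres flexible-server parameter set \
  --resource-group "$RG" --server-name "$SERVER" \
  --name log_lock_waits --value on
```

Most logging parameters are dynamic and take effect without a server restart, but check each parameter's description in the portal before changing it on a production server.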
Step 2. Configure diagnostic settings
Once configured, the logs can be shipped to a storage account or to a Log Analytics workspace. This is configured in the server's diagnostic settings.
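Diagnostic settings can be created in the portal or scripted. As a hedged Azure CLI sketch, the following routes the server's PostgreSQL logs to a Log Analytics workspace; all subscription, resource group, server, and workspace names are placeholders:

```shell
# Placeholder resource IDs; substitute your server's and workspace's actual IDs.
az monitor diagnostic-settings create \
  --name send-pg-logs-to-law \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "PostgreSQLLogs", "enabled": true}]'
```

Leave the placeholder segments (`<sub-id>`, `<rg>`, and so on) as-is until you fill in your own values; the command will fail with invalid resource IDs otherwise.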
Step 3. Accessing logs via Log Analytics
The way you access the logs depends on which destination you choose. For this article, we will use Log Analytics.
For Log Analytics, logs are sent to the workspace you selected. The Postgres logs use the AzureDiagnostics collection mode, so they can be queried from the AzureDiagnostics table. Use the following KQL query to get started.
// Find Errors
// Search for errors in the last 6 hours.
// To create an alert for this query, click '+ New alert rule'
AzureDiagnostics
| where Resource == "VARUND-PG1"
| where Category == "PostgreSQLLogs"
| where TimeGenerated > ago(6h)
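Once the basic query works, you can refine it for specific scenarios. As a sketch, the following KQL query looks for statements logged by log_min_duration_statement over the last 24 hours; it assumes the AzureDiagnostics schema, where the log text lands in the Message column (slow-query entries contain a "duration:" prefix in PostgreSQL's log output):

```kusto
// Find slow queries logged by log_min_duration_statement in the last 24 hours
AzureDiagnostics
| where Category == "PostgreSQLLogs"
| where TimeGenerated > ago(24h)
| where Message contains "duration:"
| project TimeGenerated, Resource, Message
| order by TimeGenerated desc
```

Verify the column names against your own workspace with a quick `AzureDiagnostics | take 10` before building alerts on top of them.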
Verbose logging tends to cause performance issues, especially if you log all statements, because the server does additional work to log each SQL statement. Our recommendation is to be very careful when enabling statement-logging options.
log_statement = 'none' | 'ddl' | 'mod' | 'all'
It may be tempting to set this to "all" to capture every SQL statement running on the server, but this may not always be a good idea in reality. On a busy production instance logging thousands of SELECT statements per minute, this can generate a huge amount of logging information. If you need to set the parameter to these values, make sure you are only doing so for a short period of time for troubleshooting purposes, and closely monitor storage space throughout. For a complete list of statement-level logging parameters, please refer to the PostgreSQL documentation.
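One way to keep the verbose setting short-lived is to script both the enable and the revert together. A minimal sketch using the Azure CLI, with placeholder resource names; this assumes your server's log_statement default is 'none' (verify your current value before reverting):

```shell
# Placeholder names; enable full statement logging only briefly for troubleshooting.
az postgres flexible-server parameter set \
  --resource-group my-resource-group --server-name my-pg-server \
  --name log_statement --value all

# ... reproduce the issue and inspect the logs ...

# Revert to the quieter setting when done.
az postgres flexible-server parameter set \
  --resource-group my-resource-group --server-name my-pg-server \
  --name log_statement --value none
```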
Azure Database for PostgreSQL logs provide useful information about database activity that can help in performance tuning and troubleshooting. While a perfect logging configuration will be a matter of trial and error, what I have explained here is how you can configure logs, set up a logging destination, and query logs via Azure Log Analytics.