How to configure Postgres log settings
Published Mar 09 2020

The PostgreSQL logs can seem like a mysterious landscape you’re thrown into when something goes wrong with your app. You look through the logs wondering, ‘What does Postgres log anyway? What kind of errors? Will it tell me about long-held locks?’ etc. But when something goes wrong with your app is the wrong time to be configuring Postgres logs. Set your logs up well ahead of time to get the most out of them.

 

The key with database logs in Postgres is having enough to help you be aware of and diagnose issues, but not so verbose that they drag down your Postgres server’s performance. To achieve this balance, it’s important to understand the Postgres log parameters.

 

If you’re running your own Postgres installation, configure the logging settings in the postgresql.conf file or by using ALTER SYSTEM. If you are using a managed Postgres database service (like Azure Database for PostgreSQL), its documentation will provide guidance on how to configure parameters.
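For example, on a self-managed server, a minimal sketch of changing a logging parameter with ALTER SYSTEM and reloading the configuration (most logging parameters take effect on reload, without a restart) might look like this:

-- Change a logging parameter server-wide (requires superuser)
ALTER SYSTEM SET log_min_messages = 'warning';
-- Ask the server to re-read its configuration
SELECT pg_reload_conf();
-- Confirm the active value
SHOW log_min_messages;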

 

Default Postgres log settings that can help you

 

log_min_messages = WARNING

The log_min_messages setting is the volume control for Postgres’s system messages. The default means you’ll see logs with severity WARNING or higher (ERROR, LOG, FATAL, PANIC). See this table in the PostgreSQL docs for an explanation of what each severity means. Some examples of logs generated due to this parameter:

 

ERROR: out of memory
LOG:  parameter "log_lock_waits" changed to "ON"
FATAL:  terminating connection due to conflict with recovery

 

WARNING is a good default setting to keep for log_min_messages because it provides useful information that isn’t costly to have. You don’t want to dial logging down past ERROR (to FATAL or PANIC), because you’d miss (you guessed it) errors.
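If you’re not sure what your server is currently logging, you can check the active settings with a quick query against the standard pg_settings view, for example:

-- List the current logging-related settings and where each value came from
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'log%'
ORDER BY name;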

 

log_min_error_statement = ERROR

log_min_error_statement is the SQL-statement counterpart to log_min_messages: it sets the minimum severity at which the SQL statement that caused a message is also written to the log. In the example below, the first log line is emitted because of log_min_messages, and the paired second log line is emitted because of log_min_error_statement.

 

ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT * FROM mytable WHERE id = 1 FOR UPDATE;

 

You can see that without the log_min_error_statement functionality, it would be hard to know what got aborted.

 

Because the logs produced by these two parameters are paired in this way, you may prefer to match their settings, for example by setting log_min_error_statement to WARNING as well. That way every system log that has a corresponding SQL statement will be logged together with that statement.
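A minimal sketch of matching the two settings, assuming WARNING is the level you want:

-- Keep system messages and their statements at the same threshold
ALTER SYSTEM SET log_min_messages = 'warning';
ALTER SYSTEM SET log_min_error_statement = 'warning';
SELECT pg_reload_conf();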

 

log_error_verbosity = DEFAULT

In Postgres, log_error_verbosity has three possible settings. DEFAULT is a good middle ground between TERSE and VERBOSE. DEFAULT adds detail, hint, query, and context information to the log if they are available for that error. For example,

 

ERROR: cannot drop table customers because other objects depend on it
DETAIL: constraint orders_fkey on table orders depends on table customers
HINT: Use DROP ... CASCADE to drop the dependent objects too.

 

For general use, stick with DEFAULT. TERSE doesn’t provide guidance on what to do about an error. Turn to VERBOSE only if you plan to actually look up the Postgres source code.

 

The following two parameters, log_connections and log_checkpoints, are OFF by default in standard Postgres deployments. They are ON by default in Azure Database for PostgreSQL because we believe it’s generally helpful to have this information.

 

log_connections = ON

A useful feature of log_connections is that it logs each connection attempt (“connection received”) on one line and successful authentication (“connection authorized”) on a separate line. This is handy for identifying failed connection attempts and where they originate: log_connections shows you the IP address of the connecting client.

 

LOG: connection received: host=122.220.2.133 port=5432 pid=655912
LOG: connection authorized: user=bob database=postgres SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
 
LOG: connection received: host=122.220.2.133 port=5432 pid=655920
FATAL: password authentication failed for user "bob"
DETAIL: Password does not match for user "bob".
Connection matched pg_hba.conf line 7: "host all all 122.220.2.133/32 password"
 
LOG: connection received: host=24.2.2.134 port=5432 pid=655908
FATAL: no pg_hba.conf entry for host "24.2.2.134", user "alice", database "postgres", SSL on

 

By the way, pg_hba.conf is Postgres’s client authentication control file.

 

log_checkpoints = ON

Checkpoints are an important part of a Postgres server’s lifecycle. A checkpoint is the point at which all data file changes described by the Write-Ahead Log (WAL) so far have been flushed to the data files on disk. Checkpoints are I/O-intensive events, but a recent checkpoint also speeds up crash recovery, and therefore server restart times, because less WAL has to be replayed. log_checkpoints gives you insight into when checkpoints start, how long they take, and some statistics like buffers written.

 

LOG: checkpoint starting: time
LOG: checkpoint complete: wrote 4122 buffers (1.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=269.657 s, sync=0.000 s, total=269.688 s; sync files=48, longest=0.002 s, average=0.000 s; distance=126856 kB, estimate=126856 kB
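If you want to cross-check checkpoint activity outside the logs, the pg_stat_bgwriter view (on the Postgres versions current when this post was written) keeps cumulative counters, for example:

-- Cumulative checkpoint statistics since the last stats reset
-- (newer Postgres releases move these counters to pg_stat_checkpointer)
SELECT checkpoints_timed,   -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,     -- checkpoints requested by WAL volume or explicitly
       buffers_checkpoint   -- buffers written during checkpoints
FROM pg_stat_bgwriter;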

 

 

Postgres log settings you might be interested in adjusting and why

 

log_disconnections = ON

(Default is OFF). As you might guess, log_disconnections is the counterpart to log_connections. If you need to know when connections end and how long each connection lasted, consider turning this ON.

 

LOG: disconnection: session time: 0:05:42.621 user=alice database=postgres host=122.220.2.133 port=5432

 

 

log_lock_waits = ON

(Default is OFF). If you suspect that lock waits are impacting the performance of your application on Postgres, or you’ve had trouble with locks before and want to keep an eye on them, log_lock_waits is the parameter for you.

 

A log is generated if a session waited longer than deadlock_timeout for a lock. The two parameters (deadlock_timeout and log_lock_waits) work in tandem. To have effective lock logging, don’t set deadlock_timeout too low—your logs will start getting verbose and the important waits won’t stand out. The default for deadlock_timeout is 1s.
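A minimal sketch of enabling lock-wait logging, keeping deadlock_timeout at its default; the log lines below show the kind of output you’d then see:

-- Log any lock wait that lasts longer than deadlock_timeout
ALTER SYSTEM SET log_lock_waits = on;
-- 1s is the Postgres default; raise it if one-second waits are routine in your workload
ALTER SYSTEM SET deadlock_timeout = '1s';
SELECT pg_reload_conf();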

 

LOG: process 667740 still waiting for ShareLock on transaction 599 after 1000.013 ms
DETAIL: Process holding the lock: 658912. Wait queue: 667740.
CONTEXT: while locking tuple (0,2) in relation "mytable"
STATEMENT: SELECT * FROM mytable WHERE id = 2 FOR UPDATE;
LOG: process 667740 acquired ShareLock on transaction 599 after 30671.632 ms

 

Note that actual deadlocks are logged independently of the log_lock_waits parameter, because they are classified as a database error. You’ll see them as long as log_min_messages is at ERROR or a more verbose level (the default, WARNING, qualifies).

 

ERROR: deadlock detected
DETAIL: Process 658912 waits for ShareLock on transaction 597; blocked by process 660260.
               Process 660260 waits for ShareLock on transaction 596; blocked by process 658912.
               Process 658912: SELECT * FROM mytable WHERE id = 2 FOR UPDATE;
               Process 660260: SELECT * FROM mytable WHERE id = 1 FOR UPDATE;
HINT: See server log for query details.
CONTEXT: while locking tuple (0,2) in relation "mytable"
STATEMENT: SELECT * FROM mytable WHERE id = 2 FOR UPDATE;

 

 

log_line_prefix = %m user=%u db=%d pid=%p:

(The Postgres default is '%m [%p] ', time and process ID; the Azure Postgres default is '%t-%c-', time and session ID.)

This is one of my favorite logging parameters because it has a positive impact on all your Postgres logs. log_line_prefix tells Postgres to add some standard information to the beginning of each log line. The suggested value of %m user=%u db=%d pid=%p: would provide the timestamp, user name, database name, and process ID for each log. That way, when you see a log you can quickly narrow down who (user) and where (database). Logging the process or session ID lets you correlate associated events.
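A sketch of applying the suggested prefix; note that the value needs quoting and that a trailing space keeps the prefix from running into the message:

-- Prefix each log line with timestamp, user, database, and process ID
ALTER SYSTEM SET log_line_prefix = '%m user=%u db=%d pid=%p: ';
SELECT pg_reload_conf();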

 

2020-03-05 23:43:07.793 UTC user=bob db=postgres pid=670232: LOG: parameter "work_mem" changed to "8192"
2020-04-05 02:05:07.423 UTC user=alice db=postgres pid=879234: ERROR: permission denied for table customers

 

In other examples throughout this blog, log lines are shown without the prefix for brevity.

 

Be careful with statement logging

Verbose logging tends to cause performance issues, especially if you log all statements. That’s your server doing additional work for each and every SQL statement. The extent of impact varies depending on the logging pipeline and its storage/IO characteristics.

In general, be very careful with statement log settings. Most of them are Booleans (everything or nothing). They are all off by default.

 

log_statement

log_statement_stats

log_duration

debug_print_parse, debug_print_plan, debug_print_rewritten

 

(You can find more information on these parameters in the Postgres documentation).

 

Azure Database for PostgreSQL users, you get the benefit of Query Store. Query Store is a more performant way of getting a comprehensive historical view of statistics for all queries.

 

If you’re going to use statement logging because you need historical information on your Postgres queries:

  1. Use only log_min_duration_statement
    The log_min_duration_statement setting lets you limit the volume of statements that are logged. If you set log_min_duration_statement to 900ms, for example, only the queries that exceed that duration are logged (see the sketch after this list). Of course, this is only useful as a volume control if the value you pick is a reasonable threshold for what a ‘slow’ query is in your workload.

  2. Measure the performance impact of statement logging at different verbosity levels, so you’re clear on what’s an acceptable tradeoff for you.
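Following point 1, a minimal sketch using the 900ms threshold from the example above:

-- Log only statements that take 900 ms or longer to complete
ALTER SYSTEM SET log_min_duration_statement = '900ms';
SELECT pg_reload_conf();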

 

If you’re interested in log_statement = ALL for auditing purposes, consider using pgAudit instead. pgAudit provides rich controls for configuring audit logging. You can learn more about pgAudit in this blog post.
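As a rough sketch of what pgAudit setup looks like on a self-managed server (Azure Postgres exposes the equivalent server parameters through the portal and CLI), assuming you want to audit writes and DDL:

-- pgAudit has to be preloaded; changing shared_preload_libraries requires a restart
ALTER SYSTEM SET shared_preload_libraries = 'pgaudit';
-- After the restart, install the extension and pick the statement classes to audit
CREATE EXTENSION pgaudit;
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
SELECT pg_reload_conf();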

 

Other settings of note to Azure Database for PostgreSQL users

 

logging_collector

In standard Postgres, logging_collector controls whether a background process captures log messages sent to stderr and redirects them into log files, which is highly recommended unless you log via syslog.

For Azure Postgres, logging_collector is used to control whether logs go to the short-term storage location (ON), or do not go there at all (OFF). Note that this is independent of your Azure diagnostic settings, which offer longer-term retention. In Azure Postgres, you may see performance benefits from setting logging_collector to OFF, and using Azure diagnostic settings alone.

 

log_retention_days

log_retention_days is an Azure Postgres specific parameter that controls the number of days (3 to 7) that logs are kept in Azure Postgres short-term log storage. The default is 3 days (but retention is also limited by the fixed size of this log store). For longer-term retention and larger storage, consider using Azure diagnostic settings. Visit the Azure Postgres logging documentation for more information.

 

Go forth and log

Configuring good database logs is probably not a glamorous part of your job description. Yet when something goes wrong the first place you turn to is your logs. Setting up logs that work is part of good application hygiene. It may not always spark joy, but in the long run it makes your apps healthier and easier to diagnose.

 

If you have any feedback for us or questions on Postgres logs, drop us an email @AskAzureDBforPostgreSQL. Till then, go forth and log.

 

For an exhaustive list of Postgres logging parameters, their definitions, and any specifics for the Postgres version you’re running, visit the PostgreSQL documentation.

 
