GIN (Generalized Inverted Index) indexes are a cornerstone of PostgreSQL when working with JSONB, arrays, and full-text search. They provide excellent read performance, but their write behavior—especially under sustained insert workloads—can vary dramatically depending on how data is primarily written and how GIN index maintenance is configured.
One often-overlooked area is the interaction between the fastupdate storage parameter and the gin_pending_list_limit setting. While these settings do not directly impact query performance, they play a critical role in insert throughput, CPU usage, and worst-case write latency for GIN-indexed tables.
This post explains how the GIN pending list works, how gin_pending_list_limit governs its behavior, and why choosing the right configuration can make or break write performance on large datasets and write-intensive database operations.
Let us now walk through the two parameters introduced above.
Fastupdate
The fastupdate storage parameter controls how PostgreSQL handles writes to a GIN index:
When fastupdate = ON (default)
GIN buffers new index entries in a pending list—an unsorted set of pages stored within the index—before merging them into the main index structure.
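Because fastupdate is a storage parameter, it is set per index rather than server-wide. A minimal sketch (the table and index names here are illustrative, not from the test setup):

```sql
-- fastupdate is on by default; stating it explicitly documents the intent.
CREATE INDEX idx_docs_payload
    ON docs USING gin (payload)
    WITH (fastupdate = on);
```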
gin_pending_list_limit
The gin_pending_list_limit parameter in PostgreSQL controls the maximum size of the pending list for a GIN (Generalized Inverted Index) before it is flushed to the main index structure. This setting can significantly affect insert performance and index maintenance behavior. By default, this parameter is set to 4 MB.
- New GIN entries are first written to the pending list.
- This makes inserts much faster because PostgreSQL batches writes and avoids expensive GIN maintenance on each individual insert.
- The pending list is later merged into the main GIN index when:
  - autovacuum or a manual VACUUM runs, or
  - the pending list exceeds gin_pending_list_limit.
- During heavy insert workloads, these merge cycles can cause latency spikes.
In short, the pending list makes writes cheap—until the cleanup happens.
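The limit can be raised globally or overridden per index. A sketch, assuming an illustrative index name (the per-index storage parameter is specified in kilobytes):

```sql
-- Server-wide default; takes effect after a configuration reload.
ALTER SYSTEM SET gin_pending_list_limit = '8MB';
SELECT pg_reload_conf();

-- Per-index override (value in kB); this takes precedence over the server setting.
ALTER INDEX idx_docs_payload SET (gin_pending_list_limit = 8192);
```

A larger limit means fewer but bigger flush cycles; a smaller limit means more frequent, cheaper ones.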
How fastupdate Interacts With gin_pending_list_limit
Together, these parameters decide how much index maintenance work each insert must perform.
When fastupdate = ON
- Pending list absorbs writes efficiently → best for single inserts
- Flush cycles controlled by gin_pending_list_limit
When fastupdate = OFF
- Inserts bypass the pending list and write directly to the index
- This increases CPU costs dramatically during both single and batch inserts
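To switch an existing index to direct writes, fastupdate can be disabled in place. Note that entries already sitting in the pending list are not merged by the ALTER itself; the built-in gin_clean_pending_list() function flushes them explicitly. A sketch with an illustrative index name:

```sql
-- Subsequent inserts now update the main GIN structure directly.
ALTER INDEX idx_docs_payload SET (fastupdate = off);

-- Merge any entries still in the pending list; returns the number of pages removed.
SELECT gin_clean_pending_list('idx_docs_payload');
```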
Test runs, results, and analysis
Below are some test-run results based on the different options highlighted in the sections above.
A test table (of size 3 TB) with a GIN index was used. We tested insert performance under different configurations of fastupdate (ON/OFF) for both single and batch inserts.
SKU: 16 vcore and 8 TB storage
Setup:
- Create a table with a jsonb column and create a GIN index on that column.
- Insert around 2 TB of data
- Set up a workload performing single-row inserts for 15 minutes.
- In a second run, set up a workload performing batch inserts for 15 minutes.
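The setup above can be sketched as follows. Names and the seed-data shape are illustrative, not the exact test harness, and the real runs loaded roughly 2 TB rather than the one million rows shown here:

```sql
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb
);
CREATE INDEX idx_events_payload ON events USING gin (payload);

-- Matches the test condition: no automatic pending-list cleanup on this table.
ALTER TABLE events SET (autovacuum_enabled = off);

-- Seed data: generated jsonb documents.
INSERT INTO events (payload)
SELECT jsonb_build_object('user_id', g, 'tags', jsonb_build_array('a', 'b'))
FROM generate_series(1, 1000000) AS g;
```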
Key Factor:
- Autovacuum was turned off, so pending list cleanup did not occur automatically.
- Runs were captured over a 15-minute window
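With autovacuum off, pending-list growth can be observed directly via the pgstattuple extension's pgstatginindex() function (the index name below is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- pending_pages / pending_tuples show how much work the next flush will do.
SELECT version, pending_pages, pending_tuples
FROM pgstatginindex('idx_events_payload');
```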
Single Inserts:
Batch Inserts:
Analysis:
Fastupdate ON is optimal for single-row, write-heavy workloads (much lower per-insert latency and significantly lower CPU usage), but it hurts sustained throughput and worst-case latency due to pending-list cleanups.
Fastupdate OFF consistently wins for batch/bulk inserts, delivering ~1.5–1.7× higher throughput, significantly lower max execution time, and more predictable behavior despite higher CPU consumption, making it the better choice for controlled batch loads.
Conclusion
GIN indexes are often treated as “build once and forget,” but for write-heavy systems, that mindset leaves significant performance on the table. By understanding how the pending list works—and tuning fastupdate and gin_pending_list_limit intentionally—you can dramatically improve both throughput and stability in large-scale PostgreSQL workloads.
If you routinely work with heavy JSONB or array ingestion, these settings deserve a permanent spot in your performance toolbox.