---
title: Read Throughput
---

As a general rule of thumb, the performance of expensive search queries can be greatly improved if they are able to access more parallel Postgres workers and more shared buffer memory.

## Parallel Workers

There are three settings that control how many parallel workers ultimately get assigned to a query. First, `max_worker_processes` is a global limit on the number of worker processes. Next, `max_parallel_workers` is a subset of `max_worker_processes` and sets the limit for workers used in parallel queries. Finally, `max_parallel_workers_per_gather` limits how many workers a _single query_ can receive.

```ini postgresql.conf
max_worker_processes = 72
max_parallel_workers = 64
max_parallel_workers_per_gather = 4
```

In the above example, the maximum number of workers that a single query can receive is set to `4`. The `max_parallel_workers` pool is set to `64`, which means that `16` queries can execute simultaneously with `4` workers each. Finally, `max_worker_processes` is set to `72` to leave headroom for other workers like autovacuum and replication.

In practice, we recommend experimenting with different settings, as the best configuration depends on the underlying hardware, query patterns, and volume of data.

If all `max_parallel_workers` are in use, Postgres will still execute additional queries, but those queries will run without parallelism. This means that queries do not fail; they may just run slower due to lack of parallelism.

## Shared Buffers

`shared_buffers` controls how much memory is available to the Postgres buffer cache. We recommend allocating no more than 40% of total memory to `shared_buffers`.

```ini postgresql.conf
shared_buffers = 8GB
```

The `pg_prewarm` extension can be used to load the BM25 index into the buffer cache after Postgres restarts. A higher `shared_buffers` value allows more of the index to be stored in the buffer cache.

```sql
CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('search_idx');
```

## Autovacuum

If an index experiences frequent writes, the search performance of some queries like [sorting](/documentation/full-text/sorting) or [aggregates](/documentation/aggregates) can degrade if `VACUUM` has not been run recently. This is because writes can cause parts of Postgres' visibility map to go out of date, and `VACUUM` updates the visibility map.

To determine whether search performance is degraded by a lack of `VACUUM`, run `EXPLAIN ANALYZE` over a query. A `Parallel Custom Scan` in the query plan with a large number of `Heap Fetches` typically means that `VACUUM` should be run.

Postgres can be configured to automatically vacuum a table when a certain number of rows have been updated. Autovacuum settings can be set globally in `postgresql.conf` or for a specific table.

```sql
ALTER TABLE mock_items SET (autovacuum_vacuum_threshold = 500);
```

There are several [autovacuum settings](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html), but the important ones to note are:

1. `autovacuum_vacuum_scale_factor` triggers an autovacuum if a certain percentage of rows in a table have been updated.
2. `autovacuum_vacuum_threshold` triggers an autovacuum if an absolute number of rows have been updated.
3. `autovacuum_naptime` ensures that vacuum does not run too frequently.

This means that setting `autovacuum_vacuum_scale_factor` to `0` and `autovacuum_vacuum_threshold` to `100000` will trigger an autovacuum after every `100000` row updates.
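For example, that combination could be applied to a single table. The following is a minimal sketch using the `mock_items` table from the earlier example; the values are illustrative, not recommendations.

```sql
-- Illustrative values: disable the percentage-based trigger and
-- autovacuum after roughly 100000 row updates on this table
ALTER TABLE mock_items SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold = 100000
);
```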
To determine the right autovacuum settings for your table, we recommend examining the table's write volumes and adjusting the autovacuum threshold accordingly.
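One way to examine write volume is Postgres' built-in `pg_stat_user_tables` statistics view. The query below is a small sketch, again assuming the `mock_items` table; it reports accumulated writes, dead rows, and when the table was last vacuumed.

```sql
-- Accumulated writes, dead rows, and last (auto)vacuum times
SELECT relname,
       n_tup_ins + n_tup_upd + n_tup_del AS total_writes,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mock_items';
```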