This document describes the system configuration parameters in the Resource Consumption category.
Note: To ensure system stability and security, exercise caution when manually modifying these parameters.
The resource consumption parameters are described below.
Sets the maximum amount of memory (in KB) to be used by each autovacuum worker process.
This setting has no effect on the behavior of VACUUM when run in other contexts. The default is -1, meaning the value of maintenance_work_mem is used instead.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | -1 | -1 ~ (INT_MAX/1024) | segments; system; reload |
Specifies the dynamic shared memory implementation that the server should use.
Possible values are posix (POSIX shared memory allocated via shm_open), sysv (System V shared memory allocated via shmget), windows (Windows shared memory), and mmap (simulated shared memory using memory-mapped files stored in the data directory). Not all values are supported on all platforms; the first supported option is the default for that platform. The mmap option is not the default on any platform and is generally discouraged, as the operating system repeatedly writes modified pages back to disk, increasing system I/O load.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| enum | posix | posix, sysv, windows, mmap | segments; system; restart |
Specifies the maximum percentage of system memory resources that can be allocated to resource groups on each YMatrix database Segment node.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| floating point | 0.7 | 0.1 ~ 1.0 | segments; system; restart |
Sets the total amount of memory (in MB) that all postgres processes on an active Segment instance can use.
Note: The gp_vmem_protect_limit server configuration parameter takes effect only when resource management is set to Resource Queues.
If a query causes memory usage to exceed this limit, no additional memory is allocated and the query fails.
When setting this parameter, specify only the numeric value. For example, to set 4096MB, use 4096; do not append the unit MB.
To prevent excessive memory allocation, use the following calculations to estimate a safe gp_vmem_protect_limit value. First, calculate gp_vmem, which represents the YMatrix database memory available on the host:
```
gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
```
where `SWAP` and `RAM` are the host's swap space and physical RAM in GB, respectively.
Next, calculate acting_primary_segments, the maximum number of Primaries that could run on a single host if Mirrors are activated due to cluster failure. For example, if Mirrors are arranged in 4 host blocks with 8 Primaries per host, a single Segment host failure would activate 2 or 3 Mirrors on each remaining host in the block.
For this configuration, the acting_primary_segments value is 11 (8 Primaries plus 3 Mirrors activated on failure).
Then compute gp_vmem_protect_limit. The result should be in MB:
```
gp_vmem_protect_limit = gp_vmem / acting_primary_segments
```
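As a worked sketch of the two formulas above, using hypothetical hardware (64 GB of swap, 256 GB of RAM) and the 11 acting primaries from the example scenario. The GB-to-MB conversion is my assumption, since gp_vmem is derived in GB while gp_vmem_protect_limit is set in MB:

```python
# Hypothetical host: 64 GB swap, 256 GB RAM, 11 acting primary segments.
swap_gb = 64
ram_gb = 256
acting_primary_segments = 11

# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7, in GB
gp_vmem_gb = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7

# gp_vmem_protect_limit = gp_vmem / acting_primary_segments, in MB
# (gp_vmem is converted from GB to MB before dividing -- an assumption)
gp_vmem_protect_limit_mb = gp_vmem_gb * 1024 / acting_primary_segments

print(round(gp_vmem_gb, 1))           # available YMatrix memory, in GB
print(int(gp_vmem_protect_limit_mb))  # candidate gp_vmem_protect_limit value
```

For this hypothetical host, the calculation yields roughly 176 GB of gp_vmem and a gp_vmem_protect_limit of about 16411.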
For workloads generating many workfiles, use the following formula to adjust gp_vmem:
```
gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM - (300KB * total_#_workfiles))) / 1.7
```
Based on the gp_vmem value, you can compute the vm.overcommit_ratio kernel parameter:
```
vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM
```
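Continuing the hypothetical numbers from the gp_vmem example (256 GB RAM, gp_vmem of about 176.3 GB), the kernel setting can be derived as follows. Scaling the resulting fraction by 100 to match the integer-percentage form the kernel expects is my assumption:

```python
# Hypothetical inputs: 256 GB RAM, gp_vmem previously estimated at 176.3 GB.
ram_gb = 256
gp_vmem_gb = 176.3

# vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM
fraction = (ram_gb - (0.026 * gp_vmem_gb)) / ram_gb
overcommit_ratio = round(fraction * 100)  # kernel expects an integer percentage

print(overcommit_ratio)  # value for: sysctl -w vm.overcommit_ratio=<value>
```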
Note: The default value of kernel parameter vm.overcommit_ratio in Red Hat Enterprise Linux is 50.
If you deploy using a GUI tool, this parameter is automatically calculated based on server hardware. If deploying via command line, the default is 50.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 8192 | 0 ~ (INT_MAX/2) | segments; system; restart |
If a query executor process consumes more memory than this parameter value (in MB), the process will not be cached for reuse after completion.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 500 | 1 ~ (INT_MAX/2) | segments; system; restart |
Sets the maximum number of temporary spill files (also known as workfiles) allowed per query on each Segment.
Set the value to -1 to allow an unlimited number of spill files.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 100000 | 0 ~ INT_MAX | master; session; reload |
Sets the maximum disk space (in KB) allowed for temporary spill files created by a single query on each Segment.
The default is 0, meaning no limit is enforced.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 0 | 0 ~ INT_MAX | master; session; reload |
Sets the maximum total disk space (in KB) allowed for all running queries to create temporary spill files on each Segment.
The default is 0, meaning no limit is enforced.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 0 | 0 ~ INT_MAX | segments; system; restart |
Controls whether huge pages are requested for the main shared memory area.
Possible values are try (the default), on, and off. With try, the server attempts to request huge pages and falls back to the default method if the request fails. With on, failure to allocate huge pages prevents the server from starting. With off, huge pages are not requested. On platforms that do not support huge pages, a setting of on is ignored. On Windows, the account running the server must hold the "Lock Pages in Memory" right, which can be assigned with the Windows Group Policy tool (gpedit.msc). To start the database server as a single process (not as a Windows service) from a command window, the window must be run as administrator or User Account Control (UAC) must be disabled, because standard command windows revoke the "Lock Pages in Memory" privilege at startup when UAC is enabled. Note that this setting (huge_pages) affects only the main shared memory area.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| enum | try | on/off/true/false/yes/no/1/0/try | segments; system; restart |
Sets the maximum amount of memory (in KB) to be used for maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
Since only one such operation can run at a time in a database session, and an installation normally does not have many of them running concurrently, setting this value significantly larger than work_mem is safe. Larger settings can improve the performance of VACUUM and database restore operations. The default is 65536, i.e., 64MB.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 65536 | 1024 ~ (INT_MAX/1024) | master; session; reload |
Sets the maximum number of transactions that can be in the prepared state simultaneously (see PREPARE TRANSACTION).
Set max_prepared_transactions at least as high as max_connections, so that every session can have one pending prepared transaction. If you do not use prepared transactions, set this parameter to 0.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 50 | 1 ~ 262143 | segments; system; restart |
Sets the maximum safe depth of the server's execution stack (in KB).
The ideal setting is the actual stack size limit enforced by the kernel (as set by ulimit -s or its platform equivalent), minus approximately 1MB for safety. This safety margin is required because not all server routines check stack depth; only critical, potentially recursive routines do.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 100 | 100 ~ (INT_MAX/1024) | segments; session; reload; superuser |
Specifies a memory threshold (in MB). If allocated memory exceeds this threshold, it is dumped.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1024 | 1 ~ (INT_MAX/2) | master; session; reload |
Only prints memory allocations larger than this parameter value (in MB).
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1 | 0 ~ (INT_MAX/2) | master; session; reload |
Sets the amount of shared memory buffers used by the YMatrix database server (in BLOCKS).
The minimum value is 16, and the setting should be at least 16MB; for better performance, values significantly higher than the minimum are typically used. A reasonable starting point for shared_buffers is 25% of system memory, and even higher values may benefit some workloads. To spread the processing of large amounts of new or changed data over time, higher shared_buffers settings often require a corresponding increase in checkpoint_segments.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 4096 | 16 ~ (INT_MAX/2) | segments; system; restart |
Specifies the shared memory implementation used for the main shared memory area, including YMatrix's shared buffers and other shared data.
Possible values are mmap (anonymous shared memory allocated via mmap), sysv (System V shared memory allocated via shmget), and windows (Windows shared memory). Not all values are supported on all platforms; the first supported option is the default for that platform. The sysv option is not the default on any platform and is generally not recommended, as it often requires non-default kernel settings to allow large allocations.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| enum | mmap | mmap, sysv, windows | segments; system; restart |
Sets the maximum amount of memory (in BLOCKS) used for temporary buffers per database session.
A session allocates temporary buffers only as needed, up to the limit set by temp_buffers.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1024 | 100 ~ (INT_MAX/2) | master; session; reload |
Sets the maximum amount of memory (in KB) that can be used by query operations (e.g., sorting or hash tables) before writing to temporary disk files.
Sort operations are used by ORDER BY, DISTINCT, and merge joins, while hash tables are used by hash joins, hash-based aggregates, and hash-based processing of IN subqueries. Each running operation is allowed up to the amount specified by this parameter before spilling to temporary files. Note that a complex query may run several sort or hash operations at the same time, and several sessions may run such operations concurrently, so the total memory used can be many times the work_mem value. This is a key consideration when setting this parameter.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 32768 | 64 ~ (INT_MAX/1024) | master; session; reload |
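Because work_mem is a per-operation limit, aggregate usage must be estimated separately. A minimal sketch, assuming a hypothetical workload of 100 concurrent sessions each running up to 4 sort/hash operations at the default setting:

```python
# Assumed workload figures (hypothetical): 100 sessions, 4 sort/hash
# operations per query, work_mem at its default of 32768 KB (32 MB).
work_mem_kb = 32768
concurrent_sessions = 100
ops_per_query = 4

# Worst case: every operation uses its full work_mem simultaneously.
worst_case_mb = work_mem_kb * concurrent_sessions * ops_per_query / 1024
print(worst_case_mb)  # total KB converted to MB
```

Under these assumptions the worst case is 12800 MB (12.5 GB), which illustrates why work_mem should be sized against total concurrency rather than a single query.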
Sets the maximum disk space (in KB) that a process can use for temporary files (e.g., sort and hash temporary files, or files used to hold cursors).
The default is -1, meaning no limit is enforced.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | -1 | -1 or 0 ~ INT_MAX | segments; session; reload; superuser |
Sets the maximum number of files that each server subprocess can have open simultaneously.
If you encounter "too many open files" failures, reduce this setting.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1000 | 25 ~ INT_MAX | segments; system; restart |
During the execution of VACUUM and ANALYZE commands, the system maintains an internal counter tracking the estimated cost of various I/O operations performed. When the accumulated cost reaches a limit specified by vacuum_cost_limit, the process performing the operation sleeps for a short time as specified by vacuum_cost_delay. It then resets the counter and continues.
The purpose of this feature is to allow administrators to reduce the I/O impact of these commands on concurrent database activity. In many cases, it is not important whether maintenance commands such as VACUUM and ANALYZE complete quickly; what matters is that they do not significantly affect the system's ability to perform other database operations. Cost-based vacuum delay provides a way for administrators to ensure this.
For manually issued VACUUM commands, this feature is disabled by default. To enable it, set the vacuum_cost_delay variable to a non-zero value.
The amount of time (in milliseconds) that a process will sleep when the cost limit is exceeded.
The default is 0, which disables cost-based vacuum delay; positive values enable it. When cost-based vacuuming is in use, appropriate values for vacuum_cost_delay are typically very small.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| floating point | 0 | 0 ~ 100 | segments; session; reload |
The accumulated cost that causes the vacuum process to sleep.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 200 | 1 ~ 10000 | segments; session; reload |
Estimated cost of vacuuming a buffer that dirties a previously clean block. It represents the additional I/O required to flush the dirty block to disk.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 20 | 0 ~ 10000 | segments; session; reload |
Estimated cost of vacuuming a buffer found in the shared cache. It represents the cost of locking the buffer pool, searching the shared hash table, and scanning the page contents.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1 | 0 ~ 10000 | segments; session; reload |
Cost of vacuuming a buffer that must be read from disk. It represents the cost of locking the buffer pool, searching the shared hash table, reading the required block from disk, and scanning its contents.
| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 10 | 0 ~ 10000 | segments; session; reload |
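The accumulate-and-sleep behavior described at the start of this group can be sketched as a toy loop using the default page costs and cost limit from the tables above. The 2 ms delay is an illustrative assumption, since the shipped vacuum_cost_delay default of 0 disables the mechanism entirely:

```python
import time

# Defaults taken from the tables above.
VACUUM_COST_PAGE_MISS = 10   # page read from disk
VACUUM_COST_PAGE_DIRTY = 20  # previously clean page dirtied
VACUUM_COST_LIMIT = 200
VACUUM_COST_DELAY_MS = 2     # illustrative; the shipped default of 0 disables this

accumulated = 0
naps = 0

# Pretend we vacuum 100 pages, each a cache miss that we also dirty.
for _ in range(100):
    accumulated += VACUUM_COST_PAGE_MISS + VACUUM_COST_PAGE_DIRTY
    if accumulated >= VACUUM_COST_LIMIT:
        time.sleep(VACUUM_COST_DELAY_MS / 1000)  # throttle I/O impact
        accumulated = 0                          # reset the counter and continue
        naps += 1

print(naps)  # number of throttling sleeps taken
```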
Whenever a backend writes more than this amount of data (in BLOCKS), it attempts to force the OS to flush these writes to underlying storage.
This limits the amount of dirty data in the kernel's page cache, reducing the likelihood of stalls when an fsync is issued at the end of a checkpoint or when the OS writes back data in larger batches in the background. It often greatly reduces transaction latency, though performance may degrade in some cases (especially when the workload exceeds shared_buffers but remains below the OS page cache). The default is 0, meaning forced write-back is disabled.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 0 | 0 ~ 256 | segments; session; reload |
Sets the number of concurrent disk I/O operations that YMatrix can execute simultaneously.
Allowed values are 1 to 1000, or 0 to disable asynchronous I/O requests. This feature depends on the aio_write function, which some operating systems lack; setting this parameter to any value other than 0 causes an error if the function is missing. On some systems (e.g., Solaris), the function exists but does nothing. The default is 1 on supported systems, otherwise 0.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 1 | 0 ~ 1000 | segments; session; reload |
Sets the maximum number of parallel workers that a single utility command can start.
The workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. Note that the requested number of workers may not actually be available at run time; if so, the utility operation runs with fewer workers than expected. The default value is 2; setting it to 0 disables the use of parallel workers by utility commands. Parallel utility commands treat maintenance_work_mem as a limit applied to the entire command, regardless of the number of parallel worker processes used, though they may still consume more CPU resources and I/O bandwidth.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 2 | 0 ~ 1024 | segments; session; reload |
Sets the maximum number of workers the system supports for parallel operations.
When setting this parameter, consider also max_worker_processes and max_parallel_maintenance_workers. A value higher than max_worker_processes has no effect, as parallel worker processes are drawn from the pool established by max_worker_processes.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 64 | 0 ~ 1024 | segments; session; reload |
Sets the maximum number of workers that a single Gather or Gather Merge node can start.
The workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. The default value is 2; setting it to 0 disables parallel query execution. Note that each worker is a separate process with roughly the same impact on the system as an extra user session (including its own work_mem). Resource limits such as work_mem are applied independently to each worker, meaning total resource usage across all processes can be much higher. For example, a parallel query using 4 workers may use up to 5 times the CPU time, memory, and I/O bandwidth of a non-parallel query.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 2 | 0 ~ 1024 | master; session; reload |
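The 5x figure in the example above follows directly from per-process accounting; a minimal sketch with assumed numbers:

```python
# Assumed: 4 parallel workers plus the leader process, each potentially
# using its own work_mem (32 MB here) for the same sort node.
workers = 4
processes = workers + 1  # the leader participates in the plan too
work_mem_mb = 32

peak_sort_memory_mb = processes * work_mem_mb
print(peak_sort_memory_mb)  # 5x the memory of a non-parallel sort
```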
Sets the maximum number of background processes the system can support.
When changing this value, consider also adjusting max_parallel_workers and max_parallel_maintenance_workers.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | 69 | 1 ~ 262143 | segments; system; restart |
Sets the minimum age (in minutes) that a snapshot can reach before risking an old snapshot error when used.
A value of -1 (the default) disables this feature, effectively setting the snapshot lifetime to infinity. Small values (such as 5 or 10) are allowed because they may be useful for testing. Although values up to 86400 (60 days) are allowed, note that in many workloads significant bloat or transaction ID wraparound can occur in much shorter timeframes. When this feature is enabled, space freed at the end of a relation cannot be returned to the operating system, since doing so could remove information needed to detect old snapshot conditions; all space allocated to a relation remains associated with it for reuse unless explicitly freed (e.g., via VACUUM FULL). Queries whose snapshot exceeds the threshold may fail with old snapshot errors during scans.

| Data Type | Default Value | Range | Context |
|---|---|---|---|
| int | -1 | -1 ~ 86400 | segments; system; restart |