Resource Consumption

This document describes the system configuration parameters in the Resource Consumption category.

Note!
To ensure system stability and security, manually modifying these parameters should be done with caution.

Resource consumption parameters are categorized as follows:


Memory

autovacuum_work_mem


Sets the maximum amount of memory (in KB) to be used by each autovacuum worker process.

  • This setting has no effect on VACUUM behavior when running in other contexts.
  • The default value is -1, meaning it uses the value of maintenance_work_mem.
Data Type: int | Default Value: -1 | Range: -1 ~ (INT_MAX/1024) | Context: segments; system; reload

dynamic_shared_memory_type


Specifies the dynamic shared memory implementation that the server should use.

  • Supported options are posix (for POSIX shared memory allocated via shm_open), sysv (for System V shared memory allocated via shmget), windows (for Windows shared memory), and mmap (simulated shared memory using memory-mapped files stored in the data directory). Not all values are supported on all platforms. The first supported option on a platform is its default.
  • The mmap option is not the default on any platform and is generally discouraged, as the operating system repeatedly writes modified pages back to disk, increasing system I/O load.
Data Type: enum | Default Value: posix | Range: posix, sysv, windows, mmap | Context: segments; system; restart

gp_resource_group_memory_limit


Specifies the maximum percentage of system memory resources that can be allocated to resource groups on each YMatrix database Segment node.

  • Note: This configuration parameter takes effect only when resource management is set to Resource Groups.
Data Type: floating point | Default Value: 0.7 | Range: 0.1 ~ 1.0 | Context: segments; system; restart

gp_vmem_protect_limit


Sets the total amount of memory (in MB) that all postgres processes on an active Segment instance can use.

  • Note: The gp_vmem_protect_limit server configuration parameter takes effect only when resource management is set to Resource Queues.

  • If a query causes memory usage to exceed this limit, no additional memory is allocated and the query fails.

  • When setting this parameter, specify only the numeric value. For example, to set 4096MB, use 4096; do not append the unit MB.

  • To prevent excessive memory allocation, use the following calculations to estimate a safe gp_vmem_protect_limit value. First, calculate gp_vmem, which represents the YMatrix database memory available on the host:

    gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7

    where SWAP and RAM are the host's swap space and physical RAM in GB, respectively.

  • Next, calculate acting_primary_segments, the maximum number of Primaries that could run on a single host if Mirrors are activated due to cluster failure. For example, if Mirrors are arranged in 4 host blocks with 8 Primaries per host, a single Segment host failure would activate 2 or 3 Mirrors on each remaining host in the block.

  • For this configuration, the acting_primary_segments value is 11 (8 Primaries plus 3 Mirrors activated on failure).

  • Then compute gp_vmem_protect_limit. The result should be in MB:

    gp_vmem_protect_limit = gp_vmem / acting_primary_segments

  • For workloads generating many workfiles, use the following formula to adjust gp_vmem:

    gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM - (300KB * total_#_workfiles))) / 1.7

  • Based on the gp_vmem value, you can compute the vm.overcommit_ratio kernel parameter:

    vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM

  • Note: The default value of kernel parameter vm.overcommit_ratio in Red Hat Enterprise Linux is 50.

  • If you deploy using a GUI tool, this parameter is automatically calculated based on server hardware. If deploying via command line, the default is 50.
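Taken together, the sizing steps above can be sketched in Python. The host figures below (64GB RAM, 16GB swap, 8 Primaries per host with up to 3 activated Mirrors) are hypothetical examples, not recommendations:

```python
# Sketch of the gp_vmem_protect_limit sizing calculation described above.
# All host values below are hypothetical examples.
SWAP_GB = 16
RAM_GB = 64

# YMatrix database memory available on the host, in GB.
gp_vmem = ((SWAP_GB + RAM_GB) - (7.5 + 0.05 * RAM_GB)) / 1.7

# 8 Primaries per host, plus up to 3 Mirrors activated on failure.
acting_primary_segments = 8 + 3

# Convert GB to MB and divide among the acting Primaries.
gp_vmem_protect_limit_mb = int(gp_vmem * 1024 / acting_primary_segments)

# The overcommit formula yields a fraction of RAM; the kernel setting
# vm.overcommit_ratio is that fraction expressed as a percentage.
vm_overcommit_ratio = int(100 * (RAM_GB - (0.026 * gp_vmem)) / RAM_GB)

print(gp_vmem_protect_limit_mb, vm_overcommit_ratio)
```

With these example figures, the sketch suggests a gp_vmem_protect_limit of roughly 3794MB per acting Primary.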

Data Type: int | Default Value: 8192 | Range: 0 ~ (INT_MAX/2) | Context: segments; system; restart

gp_vmem_protect_segworker_cache_limit


If a query executor process consumes more memory than this parameter value (in MB), the process will not be cached for reuse after completion.

  • Systems with many connections or idle processes may reduce this value to free up more memory on Segments.
Data Type: int | Default Value: 500 | Range: 1 ~ (INT_MAX/2) | Context: segments; system; restart

gp_workfile_limit_files_per_query


Sets the maximum number of temporary spill files (also known as workfiles) allowed per query on each Segment.

  • Spill files are created when a query requires more memory than is available. Queries exceeding this limit are terminated.
  • Set the value to -1 to allow unlimited spill files.
Data Type: int | Default Value: 100000 | Range: 0 ~ INT_MAX | Context: master; session; reload

gp_workfile_limit_per_query


Sets the maximum disk space (in KB) allowed for temporary spill files created by a single query on each Segment.

  • The default value is 0, meaning no limit is enforced.
Data Type: int | Default Value: 0 | Range: 0 ~ INT_MAX | Context: master; session; reload

gp_workfile_limit_per_segment


Sets the maximum total disk space (in KB) allowed for all running queries to create temporary spill files on each Segment.

  • The default value is 0, meaning no limit is enforced.
Data Type: int | Default Value: 0 | Range: 0 ~ INT_MAX | Context: segments; system; restart

huge_pages


Controls whether large pages are requested for the main shared memory area.

  • Valid values are try (default), on, and off. If set to try, the server attempts to request large pages and falls back to the default method if the request fails. If set to on, failure to allocate large pages prevents the server from starting. If set to off, large pages are not requested.
  • Currently, this setting is supported only on Linux and Windows. On other systems, the setting is ignored when set to try.
  • Using large pages reduces page table size and CPU time spent on memory management, improving performance.
  • On Windows, large pages are called "locked pages in memory." To use them, the Windows user account running YMatrix must have the "Lock Pages in Memory" privilege. This can be assigned using the Windows Group Policy tool (gpedit.msc). To start the database server as a single process (not as a Windows service) from a command window, the window must be run as administrator or User Account Control (UAC) must be disabled. When UAC is enabled, standard command windows revoke the "Lock Pages in Memory" privilege at startup.
  • Note: This setting affects only the main shared memory area. Operating systems such as Linux, FreeBSD, and Illumos can automatically use large ("super" or "huge") pages for regular memory without explicit requests from YMatrix. On Linux, this is known as Transparent Huge Pages (THP). THP is known to degrade YMatrix performance on certain Linux versions and is not recommended (unlike the explicit use of huge_pages).
Data Type: enum | Default Value: try | Range: on/off/true/false/yes/no/1/0/try | Context: segments; system; restart

maintenance_work_mem


Sets the maximum amount of memory (in KB) to be used for maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

  • Only one such operation can execute at a time per session, and typically few such operations run concurrently across a database installation. Therefore, setting this value significantly higher than work_mem is safe.
  • Larger values can improve performance of VACUUM and database restore operations.
  • The default value is 65536, i.e., 64MB.
Data Type: int | Default Value: 65536 | Range: 1024 ~ (INT_MAX/1024) | Context: master; session; reload

max_prepared_transactions


Sets the maximum number of transactions that can be in the prepared state simultaneously (see PREPARE TRANSACTION).

  • YMatrix internally uses prepared transactions to ensure data integrity across Segments.
  • If using prepared transactions, set max_prepared_transactions to at least as high as max_connections so that each session can have one pending prepared transaction. If not using prepared transactions, set this to 0.
Data Type: int | Default Value: 50 | Range: 1 ~ 262143 | Context: segments; system; restart

max_stack_depth


Sets the maximum safe depth of the server's execution stack (in KB).

  • The ideal setting is the actual stack size limit enforced by the kernel (set by ulimit -s or its platform equivalent), minus approximately 1MB for safety. This safety margin is required because not all server routines check stack depth—only critical, potentially recursive routines do.
Data Type: int | Default Value: 100 | Range: 100 ~ (INT_MAX/1024) | Context: segments; session; reload; superuser
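The guidance above amounts to simple arithmetic, sketched here in Python. The 8192KB kernel limit is a hypothetical example (a common default for `ulimit -s` on Linux); check the real limit on your own hosts:

```python
# Sketch: derive a safe max_stack_depth (in KB) from the kernel's stack
# limit. The kernel limit below is a hypothetical example.
kernel_stack_limit_kb = 8192   # e.g. what `ulimit -s` reports on Linux
safety_margin_kb = 1024        # leave roughly 1MB of headroom, as noted above

max_stack_depth_kb = kernel_stack_limit_kb - safety_margin_kb
print(max_stack_depth_kb)  # 7168
```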

mx_dump_mctx_threshold


Specifies a memory threshold. If allocated memory exceeds this threshold (in MB), it will be dumped.

Data Type: int | Default Value: 1024 | Range: 1 ~ (INT_MAX/2) | Context: master; session; reload

mx_dump_print_filter


Only prints memory allocations larger than this parameter value (in MB).

Data Type: int | Default Value: 1 | Range: 0 ~ (INT_MAX/2) | Context: master; session; reload


shared_buffers


Sets the amount of shared memory buffers used by the YMatrix database server (in BLOCKS).

  • This setting must be at least 16 blocks (128KB with the default 8KB block size). For better performance, values significantly higher than the minimum are typically used.
  • On a dedicated database server with 1GB or more RAM, a reasonable starting value for shared_buffers is 25% of system memory. Even higher values may benefit some workloads. To spread out the processing of large amounts of new or changed data over time, higher shared_buffers settings often require corresponding increases in checkpoint_segments.
  • This parameter can only be set at server startup. Changes require a restart.
Data Type: int | Default Value: 4096 | Range: 16 ~ (INT_MAX/2) | Context: segments; system; restart
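As a rough illustration of the 25% guideline above, the conversion from host RAM to a block count can be sketched as follows. The 32GB host and 8KB block size are assumed examples:

```python
# Sketch: turn the "25% of system RAM" starting point into a shared_buffers
# value in blocks. RAM size and block size below are assumed examples.
ram_bytes = 32 * 1024**3     # hypothetical 32GB dedicated database host
block_size_bytes = 8192      # the common default block size

shared_buffers_blocks = (ram_bytes // 4) // block_size_bytes
print(shared_buffers_blocks)  # 1048576
```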

shared_memory_type


Specifies the shared memory implementation used for the main shared memory area, including YMatrix's shared buffers and other shared data.

  • Supported options are mmap (anonymous shared memory allocated via mmap), sysv (System V shared memory via shmget), and windows (Windows shared memory). Not all values are supported on all platforms; the first supported option is the default for that platform. The sysv option is not the default on any platform and is generally not recommended, as it often requires non-default kernel settings to allow large address allocations.
Data Type: enum | Default Value: mmap | Range: mmap, sysv, windows | Context: segments; system; restart

temp_buffers


Sets the maximum amount of memory (in BLOCKS) used for temporary buffers per database session.

  • This parameter applies only to session-local buffers used for temporary tables.
  • This setting can be changed within a session, but only before the session first uses a temporary table; subsequent changes within the session have no effect.
  • A session allocates temporary buffers as needed, up to the limit specified by temp_buffers.
Data Type: int | Default Value: 1024 | Range: 100 ~ (INT_MAX/2) | Context: master; session; reload

work_mem


Sets the maximum amount of memory (in KB) that can be used by query operations (e.g., sorting or hash tables) before writing to temporary disk files.

  • Note that for complex queries, multiple sort or hash operations may run in parallel. Operations such as ORDER BY, DISTINCT, and merge joins use sorting, while hash joins, hash-based aggregates, and hash-based IN subquery processing use hash tables. Each running operation is allowed up to the amount specified by this parameter before spilling to temporary files.
  • Additionally, multiple concurrent sessions may perform such operations, so the total memory used could be many times the work_mem value. This is a key consideration when setting this parameter.
  • This parameter can be changed at the session level at any time; a server restart is not required.
Data Type: int | Default Value: 32768 | Range: 64 ~ (INT_MAX/1024) | Context: master; session; reload
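Because each running sort or hash operation may claim its own work_mem, the aggregate can be estimated as a simple product. The session and operation counts below are hypothetical examples:

```python
# Sketch: worst-case memory that work_mem-governed operations could claim
# across a workload. Session/operation counts are hypothetical examples.
work_mem_kb = 32768          # the default: 32MB per sort or hash operation
concurrent_sessions = 20
operations_per_query = 4     # e.g. several sorts and hash joins in one plan

worst_case_mb = work_mem_kb * concurrent_sessions * operations_per_query // 1024
print(worst_case_mb)  # 2560
```

Even with the default setting, this hypothetical workload could claim up to 2.5GB, which is why concurrency must be considered when raising work_mem.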


Disk

temp_file_limit


Sets the maximum disk space (in KB) that a process can use for temporary files (e.g., sort and hash temporary files, or files used to hold cursors).

  • A transaction attempting to exceed this limit will be canceled.
  • This setting limits the total space used by all temporary files for a given YMatrix process at any moment. Note that disk space used by explicitly created temporary tables is not constrained by this setting.
  • The default value is -1, meaning no limit.
Data Type: int | Default Value: -1 | Range: -1 or 0 ~ INT_MAX | Context: segments; session; reload; superuser


Kernel Resource Usage

max_files_per_process


Sets the maximum number of files that each server subprocess can have open simultaneously.

  • You need not worry about this setting if the kernel enforces a safe per-process limit. However, on some platforms (especially most BSD systems), the kernel allows individual processes to open more files than the system can support. If you encounter failures such as too many open files, reduce this setting.
Data Type: int | Default Value: 1000 | Range: 25 ~ INT_MAX | Context: segments; system; restart


Cost-based Vacuum Delay

During the execution of VACUUM and ANALYZE commands, the system maintains an internal counter tracking the estimated cost of various I/O operations performed. When the accumulated cost reaches a limit specified by vacuum_cost_limit, the process performing the operation sleeps for a short time as specified by vacuum_cost_delay. It then resets the counter and continues.

The purpose of this feature is to allow administrators to reduce the I/O impact of these commands on concurrent database activity. In many cases, it is not important whether maintenance commands such as VACUUM and ANALYZE complete quickly; what matters is that they do not significantly affect the system's ability to perform other database operations. Cost-based vacuum delay provides a way for administrators to ensure this.

For manually issued VACUUM commands, this feature is disabled by default. To enable it, set the vacuum_cost_delay variable to a non-zero value.
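The accounting described above can be sketched as a simple loop. This is a simplification, not the server's implementation; the per-page costs mirror the defaults of the vacuum_cost_page_* parameters, and the page sequence is illustrative:

```python
import time

# Simplified sketch of cost-based vacuum delay. Costs mirror the defaults
# of vacuum_cost_page_hit/miss/dirty; the page list is illustrative.
VACUUM_COST_LIMIT = 200      # vacuum_cost_limit default
VACUUM_COST_DELAY_MS = 2     # a small non-zero vacuum_cost_delay
COST = {"hit": 1, "miss": 10, "dirty": 20}

def vacuum_pages(page_kinds, sleep=time.sleep):
    """Accumulate page costs; sleep and reset whenever the limit is reached."""
    accumulated = 0
    sleeps = 0
    for kind in page_kinds:
        accumulated += COST[kind]
        if accumulated >= VACUUM_COST_LIMIT:
            sleep(VACUUM_COST_DELAY_MS / 1000.0)  # pause, then reset counter
            sleeps += 1
            accumulated = 0
    return sleeps

# 30 dirtied pages cost 600 units, so the worker pauses three times.
print(vacuum_pages(["dirty"] * 30, sleep=lambda s: None))  # 3
```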

vacuum_cost_delay


The amount of time (in milliseconds) that a process will sleep when the cost limit is exceeded.

  • The default value is 0, which disables cost-based vacuum delay. Positive values enable cost-based vacuuming.
  • When using cost-based vacuuming, appropriate values for vacuum_cost_delay are typically very small.
Data Type: floating point | Default Value: 0 | Range: 0 ~ 100 | Context: segments; session; reload

vacuum_cost_limit


The accumulated cost that causes the vacuum process to sleep.

Data Type: int | Default Value: 200 | Range: 1 ~ 10000 | Context: segments; session; reload

vacuum_cost_page_dirty


Estimated cost of vacuuming a buffer that dirties a previously clean block. It represents the additional I/O required to flush the dirty block to disk.

Data Type: int | Default Value: 20 | Range: 0 ~ 10000 | Context: segments; session; reload

vacuum_cost_page_hit


Estimated cost of vacuuming a buffer found in the shared cache. It represents the cost of locking the buffer pool, searching the shared hash table, and scanning the page contents.

Data Type: int | Default Value: 1 | Range: 0 ~ 10000 | Context: segments; session; reload

vacuum_cost_page_miss


Cost of vacuuming a buffer that must be read from disk. It represents the cost of locking the buffer pool, searching the shared hash table, reading the required block from disk, and scanning its contents.

Data Type: int | Default Value: 10 | Range: 0 ~ 10000 | Context: segments; session; reload


Asynchronous Behavior

backend_flush_after


Whenever a backend writes more than this amount of data (in BLOCKS), it attempts to force the OS to flush these writes to underlying storage.

  • This limits the amount of dirty data in the kernel page cache, reducing the likelihood of stalls during checkpoint finalization (when fsync is issued) or when the OS writes back large batches in the background. This often greatly reduces transaction latency, though performance may degrade in some cases (especially when workload exceeds shared_buffers but remains below the OS page cache).
  • This setting may have no effect on some platforms.
  • The default value is 0, meaning forced write-back is disabled.
Data Type: int | Default Value: 0 | Range: 0 ~ 256 | Context: segments; session; reload

effective_io_concurrency


Sets the number of concurrent disk I/O operations that YMatrix can execute simultaneously.

  • Increasing this value increases the number of I/O operations a single YMatrix session attempts to initiate in parallel. The allowed range is 1 to 1000, or 0 to disable asynchronous I/O requests.
  • Currently, this setting affects only bitmap heap scans.
  • For disk drives, a good starting point is the number of individual drives in the RAID 0 stripe or RAID 1 mirror used for the database (for RAID 5, do not count the parity drive). However, if the database frequently handles multiple queries from concurrent sessions, a lower value may suffice to keep the disk array busy. Values higher than needed only add CPU overhead. SSDs and other memory-based storage can often handle many concurrent requests, so their optimal value may be in the hundreds.
  • Asynchronous I/O depends on an effective posix_fadvise function (some operating systems lack it). Setting this parameter to any value other than 0 will cause an error if the function is missing. On some systems (e.g., Solaris), the function exists but does nothing.
  • The default is 1 on supported systems, otherwise 0.
  • For tables in a specific tablespace, this value can be overridden by setting the same-named parameter for that tablespace (see ALTER TABLESPACE).
Data Type: int | Default Value: 1 | Range: 0 ~ 1000 | Context: segments; session; reload
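The drive-count guidance above can be expressed as a small helper. This is only an illustration of the starting-point rule; the drive counts are hypothetical examples:

```python
# Sketch of the starting-point rule above: count the data drives in the
# array, excluding the parity drive for RAID 5. Inputs are hypothetical.
def suggested_io_concurrency(drives, raid_level):
    if raid_level == 5:
        return drives - 1    # one drive's worth of capacity holds parity
    return drives            # RAID 0 stripe or RAID 1 mirror: count them all

print(suggested_io_concurrency(6, 5))   # 5
print(suggested_io_concurrency(4, 0))   # 4
```

A busy multi-session system may keep the array saturated with a lower value, and SSD-backed storage often benefits from values far above this starting point.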

max_parallel_maintenance_workers


Sets the maximum number of parallel workers that a single utility command can start.

  • Currently, the only utility command that supports parallel workers is CREATE INDEX, and only B-tree index builds can be parallelized.
  • Parallel workers are drawn from a pool created by max_worker_processes, controlled by max_parallel_workers. Note: The requested number of workers may not be available at runtime. If so, the utility operation runs with fewer workers than intended.
  • The default value is 2. Setting this to 0 disables parallel workers for utility commands.
  • Note: Parallel utility commands should not consume more memory than equivalent non-parallel operations. This differs from parallel queries, where resource limits are typically applied per worker process. Parallel utility commands treat resource limits such as maintenance_work_mem as a limit for the entire command, regardless of the number of parallel worker processes used. However, parallel utility commands may still consume more CPU resources and I/O bandwidth.
Data Type: int | Default Value: 2 | Range: 0 ~ 1024 | Context: segments; session; reload

max_parallel_workers


Sets the maximum number of workers the system supports for parallel operations.

  • When increasing or decreasing this value, consider adjusting max_worker_processes and max_parallel_maintenance_workers.
  • Also, note that setting this value higher than max_worker_processes has no effect, as parallel worker processes are drawn from the pool established by max_worker_processes.
Data Type: int | Default Value: 64 | Range: 0 ~ 1024 | Context: segments; session; reload

max_parallel_workers_per_gather


Sets the maximum number of workers that a single Gather or Gather Merge node can start.

  • Parallel workers are drawn from a pool established by max_worker_processes, limited by max_parallel_workers.
  • Note: The requested number of workers may not be available at runtime. If so, the plan runs with fewer workers, potentially reducing efficiency.
  • The default value is 2. Setting this to 0 disables parallel query execution.
  • Note: Parallel queries may consume more resources than non-parallel queries, as each worker process is a fully independent process with system impact similar to an additional user session. This should be considered when choosing a value and configuring other resource-limiting settings (e.g., work_mem). Resource limits such as work_mem are applied independently to each worker, meaning total resource usage across all processes can be much higher. For example, a parallel query using 4 workers may use up to 5 times the CPU time, memory, and I/O bandwidth of a non-parallel query.
Data Type: int | Default Value: 2 | Range: 0 ~ 1024 | Context: master; session; reload

max_worker_processes


Sets the maximum number of background processes the system can support.

  • When running a standby server, this parameter must be set to a value equal to or greater than that on the primary server; otherwise, queries may not be allowed on the standby.
  • When changing this value, consider also adjusting max_parallel_workers and max_parallel_maintenance_workers.
Data Type: int | Default Value: 69 | Range: 1 ~ 262143 | Context: segments; system; restart

old_snapshot_threshold


Sets the minimum age (in minutes) that a snapshot can reach before risking an old snapshot error when used.

  • Dead data older than this threshold may be vacuumed. This helps prevent table bloat caused by snapshots that remain in use for a long time. To prevent incorrect results due to visible data being removed, an error is raised when a snapshot older than this threshold is used to read a page modified since the snapshot was created.
  • A value of -1 (default) disables this feature, effectively setting the snapshot lifetime to infinity.
  • Useful values in production range from several hours to days. Small values (e.g., 5 or 10) are allowed as they may be useful for testing. Although values up to 86400 (60 days) are allowed, note that in many workloads, significant bloat or transaction ID wraparound can occur in much shorter timeframes.
  • When this feature is enabled, space freed at the end of a relation cannot be returned to the OS, as it may contain information needed to detect old snapshot conditions. All space allocated to a relation remains associated with it for reuse unless explicitly freed (e.g., via VACUUM FULL).
  • This setting does not guarantee an error in all cases. For example, if a correct result can be generated from a cursor that has materialized a result set, no error is raised even if underlying rows have been vacuumed. Some tables (e.g., system catalogs) cannot be safely vacuumed early and are unaffected by this setting. For such tables, this setting does not reduce bloat or the likelihood of old snapshot errors during scans.
Data Type: int | Default Value: -1 | Range: -1 ~ 86400 | Context: segments; system; restart