This document describes the system configuration parameters in the resource consumption category.
Note!
To ensure the stability and security of the system, exercise caution when manually modifying these parameters.
The resource consumption parameters are classified as follows:
Specifies the maximum amount of memory (KB) that each automatic cleanup worker process can use. This setting has no effect on the behavior of VACUUM when run in other contexts. The default is -1, which means the value of maintenance_work_mem is used.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | -1 | -1 ~ (INT_MAX/1024) | segments; system; reload |
Specifies the dynamic shared memory implementation that the server should use.
posix
(for POSIX shared memory allocated using shm_open), sysv
(for System V shared memory allocated via shmget), windows
(for Windows shared memory), and mmap
(for memory-mapped files stored in the data directory). Not all values are supported on all platforms, and the first supported option on the platform is its default value.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| enum | posix | posix, sysv, windows, mmap | segments; system; restart |
Identifies the maximum percentage of system memory resources to be assigned to the resource group on each YMatrix database Segment node.
| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| floating point | 0.7 | 0.1 ~ 1.0 | segments; system; restart |
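As a rough, hypothetical illustration of this percentage, the sketch below assumes a Segment host with 256 GB of system memory and the default limit of 0.7; the host size is an assumption for illustration, not a recommendation.

```python
# Assumption for illustration: a Segment host with 256 GB of system memory.
system_memory_gb = 256
memory_limit_fraction = 0.7   # the default value shown in the table above

resource_group_memory_gb = system_memory_gb * memory_limit_fraction
print(f"memory assignable to resource groups ~ {resource_group_memory_gb:.0f} GB per Segment host")
```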
Sets the amount of memory (MB) that all postgres processes for the active Segment node instance can use.
Note: The gp_vmem_protect_limit server configuration parameter takes effect only when resource management is set to Resource Queue.
If a query causes this limit to be exceeded, memory will not be allocated and the query will fail.
When setting this parameter, specify only a numeric value. For example, to specify 4096 MB, use the value 4096; do not append the unit MB to the value.
To prevent over-allocating memory, the following calculations can be used to estimate a safe gp_vmem_protect_limit value. First calculate gp_vmem, the YMatrix database memory available on the host:
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM)) / 1.7
where SWAP is the host's swap space and RAM is the host's RAM, both in GB.
Next, calculate max_acting_primary_segments. This is the maximum number of Primary instances that can run on a host when Mirrors are activated due to a failure in the cluster. For example, if Mirrors are arranged in blocks of 4 hosts with 8 Primaries per host, a single Segment host failure activates two or three Mirrors on each remaining host of the failed host's block.
The max_acting_primary_segments value for this configuration is 11 (8 Primaries plus 3 Mirrors activated on failure).
Then calculate gp_vmem_protect_limit and convert the result to MB:
gp_vmem_protect_limit = gp_vmem / acting_primary_segments
For cases where a large number of working files are generated, use this calculation of gp_vmem, which accounts for the working files:
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM - (300KB * total_#_workfiles))) / 1.7
Based on the gp_vmem value, you can calculate the value of the vm.overcommit_ratio operating system kernel parameter. This parameter is set when configuring each YMatrix database host.
vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM
Note: The default value of the kernel parameter vm.overcommit_ratio in Red Hat Enterprise Linux is 50.
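As a rough illustration of the calculations above, the following Python sketch plugs the formulas in for a hypothetical host with 256 GB of RAM, 64 GB of swap, and max_acting_primary_segments = 11; the host values are assumptions for illustration, not recommendations.

```python
# Hypothetical host values -- assumptions for illustration only.
ram_gb = 256                        # RAM on the host, in GB
swap_gb = 64                        # swap space on the host, in GB
max_acting_primary_segments = 11    # e.g. 8 Primaries plus 3 Mirrors activated on failure

# gp_vmem: YMatrix database memory available on the host, in GB
gp_vmem = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7
print(f"gp_vmem               ~ {gp_vmem:.1f} GB")

# gp_vmem_protect_limit: divide by the acting primaries and convert to MB
gp_vmem_protect_limit_mb = gp_vmem * 1024 / max_acting_primary_segments
print(f"gp_vmem_protect_limit ~ {gp_vmem_protect_limit_mb:.0f} MB")

# vm.overcommit_ratio: the Linux kernel parameter is expressed as a percentage
overcommit_ratio = (ram_gb - 0.026 * gp_vmem) / ram_gb
print(f"vm.overcommit_ratio   ~ {overcommit_ratio * 100:.0f}")
```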
If you deploy using the graphical interface, this parameter is adapted automatically based on the server's hardware resources; if you deploy using the command line, the default is 8192.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 8192 | 0 ~ (INT_MAX/2) | segments; system; restart |
If a query executor process consumes more than this parameter value (MB), the process will not be cached for reuse by subsequent queries after it completes.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 500 | 1 ~ (INT_MAX/2) | segments; system; restart |
Sets the maximum number of temporary overflow files (also known as working files) allowed per query for each Segment.
Specify 0 to allow an unlimited number of overflow files.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 100000 | 0 ~ INT_MAX | master; session; reload |
Sets the maximum total disk size (KB) of temporary overflow files that a single query is allowed to create on each Segment.
The default is 0, indicating that no restriction is enforced.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 0 | 0 ~ INT_MAX | master; session; reload |
Sets the maximum total disk size (KB) of temporary overflow files that all running queries are allowed to create on each Segment.
The default is 0, indicating that no restriction is enforced.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 0 | 0 ~ INT_MAX | segments; system; restart |
Controls whether huge pages are requested for the main shared memory area.
Valid values are try (the default), on, and off. With try, the server will try to request huge pages and fall back to the default behavior if the request fails. With on, failure to request huge pages will prevent the server from starting. With off, no huge pages are requested.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| enum | try | on / off / true / false / yes / no / 1 / 0 / try | segments; system; restart |
Specifies the maximum amount of memory (KB) used in maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
It is generally safe to set this value larger than work_mem. The default is 65536 KB, which is 64 MB.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 65536 | 1024 ~ (INT_MAX/1024) | master; session; reload |
Sets the maximum number of transactions that can be in the prepared state at the same time (see PREPARE TRANSACTION).
It is recommended to set max_prepared_transactions at least as large as max_connections, so that every session can have a prepared transaction pending. If you do not intend to use prepared transactions, set this parameter to 0.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 50 | 1 ~ 262143 | segments; system; restart |
Specifies the maximum safe depth (KB) of the server's execution stack.
The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or the local equivalent), minus a safety margin of about one megabyte. The safety margin is needed because stack depth is not checked in every routine in the server, only in key potentially recursive routines.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 100 | 100 ~ (INT_MAX/1024) | segments; session; reload; superuser |
Specifies the memory threshold (MB). If the allocated memory exceeds this threshold, it is dumped.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 1024 | 1 ~ (INT_MAX/2) | master; session; reload |
Only memory sizes (MB) larger than this parameter value are printed.

| Data Type | Default Value | Value Range | Set Classification |
| --- | --- | --- | --- |
| int | 1 | 0 ~ (INT_MAX/2) | master; session; reload |
Sets the amount of shared memory buffers (in blocks) that the YMatrix database server will use.
The minimum value is 128 KB, and at least 16 KB times max_connections. For better performance, settings significantly higher than the minimum are typically used. A reasonable starting value for shared_buffers is 25% of system memory. Although even larger shared_buffers values can be effective, they may also introduce additional overhead for some workloads. Larger shared_buffers settings typically require a corresponding increase in max_wal_size, in order to spread the writing of large amounts of new or modified data over a longer period of time.

Data Type | Default Value | Value Range | Set Classification |
---|---|---|---|
int | 4096 | 16 ~ (INT_MAX/2) | segments;system;restart |
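As a rough sketch of the 25%-of-memory starting point for shared_buffers above, the following assumes a host with 64 GB of RAM and a 32 KB block size; both values are assumptions for illustration (verify the block size of your build with SHOW block_size), not recommendations.

```python
# Assumptions for illustration only -- verify block size and RAM on your own system.
ram_gb = 64            # system memory on the host, in GB
block_size_kb = 32     # assumed block size; check SHOW block_size on your cluster

# 25% of system memory as a starting point, expressed in blocks (the unit of shared_buffers)
starting_kb = 0.25 * ram_gb * 1024 * 1024
starting_blocks = int(starting_kb / block_size_kb)
print(f"shared_buffers ~ {starting_blocks} blocks ({starting_kb / (1024 * 1024):.0f} GB)")
```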
Specifies the shared memory implementation that the server should use for the main shared memory region, which holds YMatrix's shared buffers and other shared data.
Valid values are mmap (for anonymous shared memory allocated using mmap), sysv (for System V shared memory allocated via shmget), and windows (for Windows shared memory). Not all values are supported on all platforms; the first supported option is the default for that platform. The sysv option is not the default on any platform and is generally not recommended, as it typically requires non-default kernel settings to allow for large allocations.

Data type | Default value | Value range | Setting category |
---|---|---|---|
enum | mmap | mmap,sysv,windows | segments;system;restart |
Sets the maximum amount of memory (in blocks) used for temporary buffers in each database session. These session-local buffers are used only for access to temporary tables; a session allocates them as needed, up to the limit set by temp_buffers.

Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 1024 | 100 ~ (INT_MAX/2) | master;session;reload |
Sets the maximum amount of memory (in KB) that a query operation (such as a sort or hash table) can use before writing to a temporary disk file.
Sort operations are used for ORDER BY, DISTINCT, and merge joins, while hash operations are used in hash joins, hash-based aggregations, and hash-based IN subqueries. Each running operation is allowed to use up to this amount of memory before it begins writing data to temporary files. Several such operations may run concurrently within a query and across sessions, so the total memory used can be many times the work_mem value. This is an important consideration when choosing the value.

Data Type | Default Value | Valid Range | Setting Category |
---|---|---|---|
int | 32768 | 64 ~ (INT_MAX/1024) | master;session;reload |
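To make the sizing consideration for work_mem above concrete, the sketch below estimates worst-case memory use when several sort or hash operations run concurrently; the operation and session counts are assumptions for illustration.

```python
# Assumed values for illustration -- not recommendations.
work_mem_kb = 32768          # per-operation limit (the default, 32 MB)
ops_per_query = 4            # sort/hash operations a complex plan might run concurrently
concurrent_sessions = 25     # sessions running such queries at the same time

worst_case_mb = work_mem_kb * ops_per_query * concurrent_sessions / 1024
print(f"worst-case work_mem usage ~ {worst_case_mb:.0f} MB")   # ~ 3200 MB
```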
Specifies the maximum amount of disk space (in KB) that a process can use for temporary files (such as sort and hash temporary files, or storage files used to maintain cursors).
The default is -1, meaning no restriction.

Data Type | Default Value | Valid Range | Setting Category |
---|---|---|---|
int | -1 | -1 or 0 ~ INT_MAX | segments;session;reload;superuser |
Sets the maximum number of files that each server child process is allowed to open simultaneously.
If you see "Too many open files" errors, reduce this setting.

Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 1000 | 25 ~ INT_MAX | segments;system;restart |
During the execution of the VACUUM
and ANALYZE
commands, the system maintains an internal counter to track the estimated cost of various I/O operations being performed. When the cumulative cost reaches a limit (specified by vacuum_cost_limit
), the process executing these operations will sleep for a short period of time as specified by vacuum_cost_delay
. It then resets the counter and continues execution.
The purpose of this feature is to allow administrators to reduce the I/O impact of these commands on concurrent database activities. In many cases, it is not critical for maintenance commands like VACUUM
and ANALYZE
to complete quickly; what matters is that these commands do not significantly impair the system's ability to perform other database operations. Cost-based cleanup delay provides a mechanism for administrators to ensure this.
For manually issued VACUUM
commands, this feature is disabled by default. To enable it, set the vacuum_cost_delay
variable to a non-zero value.
The amount of time (in milliseconds) that the process will sleep when the cost limit is exceeded.
The default is 0, which disables the cost-based cleanup delay feature. A positive value enables cost-based cleanup delay; when it is enabled, an appropriate value for vacuum_cost_delay is typically quite small.

Data type | Default value | Value range | Setting category |
---|---|---|---|
floating point | 0 | 0 ~ 100 | segments;session;reload |
The cumulative cost at which the cleanup process will go to sleep.
Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 200 | 1 ~ 10000 | segments;session;reload |
The estimated cost charged when cleanup modifies a previously clean block. It represents the extra I/O required to flush the dirty block back to disk.
Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 20 | 0 ~ 10000 | segments;session;reload |
The estimated cost of cleaning up a buffer found in the shared buffer cache. It represents the cost of locking the buffer pool, looking up the shared hash table, and scanning the page contents.
Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 1 | 0 ~ 10000 | segments;session;reload |
The estimated cost of cleaning up a buffer that must be read from disk. It represents the cost of locking the buffer pool, looking up the shared hash table, reading the required block from disk, and scanning its contents.
Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 10 | 0 ~ 10000 | segments;session;reload |
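To see how the cost-based cleanup parameters above interact, the sketch below estimates how many pages can be processed before each sleep, using the default values from the tables above and an assumed mix of page accesses (the mix and the non-zero delay are assumptions for illustration).

```python
# Default cost values from the tables above.
cost_limit = 200        # cumulative cost that triggers a sleep
cost_page_hit = 1       # page found in the shared buffer cache
cost_page_miss = 10     # page that must be read from disk
cost_page_dirty = 20    # page that is modified (dirtied)
delay_ms = 2            # assumed non-zero delay; the default of 0 disables the feature

# Assumed access mix: 70% of pages found in cache, 30% read from disk, 20% dirtied.
avg_cost_per_page = 0.7 * cost_page_hit + 0.3 * cost_page_miss + 0.2 * cost_page_dirty
pages_per_sleep = cost_limit / avg_cost_per_page
print(f"~ {pages_per_sleep:.0f} pages processed per {delay_ms} ms sleep")
```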
Whenever the amount of data written by a backend exceeds this parameter value (in blocks), the OS is asked to force these writes out to the underlying storage. This limits the amount of dirty data in the kernel page cache, reducing the likelihood of the system stalling when fsync is issued at the end of a checkpoint or when the OS is writing back large amounts of data in the background. This often results in a significant reduction in transaction latency, but in some cases (particularly when the load exceeds shared_buffers but is below the OS page cache), performance may decrease.
The default is 0, which disables forced write-back.

Data Type | Default Value | Valid Range | Setting Category |
---|---|---|---|
int | 0 | 0 ~ 256 | segments;session;reload |
Sets the number of concurrent disk I/O operations that YMatrix expects can be executed simultaneously. Increasing this value increases the number of I/O operations that any single YMatrix session attempts to initiate in parallel. The allowed range is 1 to 1000, or 0 to disable asynchronous I/O requests.
Asynchronous I/O depends on an effective posix_fadvise function, which may not be available on some operating systems. If the function does not exist, setting this parameter to any value other than 0 will result in an error. On some operating systems (such as Solaris), the function is provided but does nothing. The default is 1 on supported systems, otherwise 0.

Data Type | Default Value | Valid Range | Setting Category |
---|---|---|---|
int | 1 | 0 ~ 1000 | segments;session;reload |
Sets the maximum number of parallel workers that can be started by a single utility command.
Parallel workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. Note: the number of workers requested may not actually be available at runtime; if this occurs, the utility operation runs with fewer workers than expected. Parallel utility commands treat maintenance_work_mem as a limit for the entire utility command, regardless of how many parallel worker processes are used. However, parallel utility commands may still consume more CPU resources and I/O bandwidth in practice.

Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 2 | 0 ~ 1024 | segments;session;reload |
Sets the maximum number of workers that the system can support for parallel operations.
When changing this value, consider also adjusting max_parallel_maintenance_workers and max_parallel_workers_per_gather. A setting higher than max_worker_processes will have no effect, as parallel worker processes are drawn from the worker process pool created by max_worker_processes.

Data type | Default value | Value range | Setting category |
---|---|---|---|
int | 64 | 0 ~ 1024 | segments;session;reload |
Sets the maximum number of workers that can be started by a single Gather or Gather Merge node.
Parallel workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. The default is 2; setting this value to 0 disables parallel query execution. Note that parallel queries may consume substantially more resources than non-parallel queries, which should be considered together with settings that control resource utilization (such as work_mem). Resource limits like work_mem are applied independently to each worker, meaning that the total resource utilization across all processes may be significantly higher than for a single process. For example, a parallel query using 4 workers may consume 5 times more CPU time, memory, and I/O bandwidth than a query that uses no workers.

Data Type | Default Value | Valid Range | Setting Category |
---|---|---|---|
int | 2 | 0 ~ 1024 | master;session;reload |
Sets the maximum number of background worker processes that the system can support.
When changing this value, consider also adjusting max_parallel_workers, max_parallel_maintenance_workers, and max_parallel_workers_per_gather.

Data type | Default value | Valid range | Setting category |
---|---|---|---|
int | 69 | 1 ~ 262143 | segments;system;restart |
The minimum amount of time (in minutes) that a snapshot can be used without risking a snapshot too old error when using the snapshot.
The default of -1 disables this feature, effectively setting the snapshot timeout to infinity. Small values (such as 0 or 1min) are only allowed because they may sometimes be useful for testing. Although settings up to 86400 minutes (60 days) are allowed, note that in many workloads extreme bloat or transaction ID wraparound may occur within a much shorter time frame. When this feature is enabled, freed space at the end of a relation cannot be released to the operating system, since doing so could remove information needed to detect snapshot too old conditions. All space allocated to the relation remains associated with it for reuse unless explicitly released (e.g., using VACUUM FULL). Tables that cannot be vacuumed early, such as system catalogs, are not affected by this setting; for such tables it neither reduces bloat nor creates the possibility of snapshot too old errors during scans.

Data Type | Default Value | Value Range | Setting Category |
---|---|---|---|
int | -1 | -1 ~ 86400 | segments;system;restart |