Resource Management

Configuration Parameters

The following table lists the configurable resource group parameters.

| Parameter | Description | Valid Range | Default |
|---|---|---|---|
| CONCURRENCY | Maximum number of concurrent transactions (active and idle) allowed in the resource group. | [0 – max_connections] | 20 |
| CPU_MAX_PERCENT | Maximum percentage of CPU resources the resource group can use. | [1 – 100] | -1 (disabled) |
| CPU_WEIGHT | Scheduling priority weight for the resource group. | [1 – 500] | 100 |
| CPUSET | Specific logical CPU cores (or hyperthreads) reserved for the resource group. | System-dependent | -1 |
| MEMORY_QUOTA | Memory limit (in MB) assigned to the resource group. | Integer (MB) | -1 (disabled; statement_mem serves as the per-query memory limit) |
| MIN_COST | Minimum query plan cost required for a query to be managed by the resource group. | Integer | 0 |
| IO_LIMIT | Maximum I/O throughput and IOPS for read/write operations, configured per tablespace. | [2 – 4294967295] or max | -1 |
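Taken together, these parameters are supplied when a resource group is created or altered. A minimal sketch, assuming the Greenplum-style CREATE RESOURCE GROUP syntax; the group name rg_reporting and role name analyst are hypothetical:

```sql
-- Hypothetical example: a resource group with explicit limits.
CREATE RESOURCE GROUP rg_reporting WITH (
    CONCURRENCY     = 10,    -- at most 10 concurrent transactions
    CPU_MAX_PERCENT = 40,    -- hard ceiling of 40% CPU
    CPU_WEIGHT      = 100,   -- default scheduling weight
    MEMORY_QUOTA    = 2048,  -- 2048 MB shared by the group's queries
    MIN_COST        = 500    -- only plans costing >= 500 are managed
);

-- Attach the group to a role so that role's queries are governed by it.
ALTER ROLE analyst RESOURCE GROUP rg_reporting;
```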

Note!
Resource groups do not apply to SET, RESET, or SHOW commands.

Configuration Details

When a user runs a query, YMatrix evaluates it against the limits defined for the assigned resource group.

Concurrency Management

CONCURRENCY controls the maximum number of concurrent transactions allowed in a resource group. The default is 20. A value of 0 disables query execution for the group.

If resource limits are not exceeded and the query does not violate the concurrency cap, YMatrix executes it immediately. If the concurrency limit is reached, subsequent transactions are queued until other queries complete.

The parameter gp_resource_group_queuing_timeout specifies how long a queued transaction waits before being canceled. The default is 0, meaning indefinite queuing.
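Both the queue timeout and the concurrency cap can be adjusted at runtime. A hedged sketch, assuming the timeout GUC takes a value in milliseconds and using a hypothetical group name:

```sql
-- Cancel queued transactions after roughly 60 seconds
-- (value assumed to be in milliseconds).
SET gp_resource_group_queuing_timeout = 60000;

-- Lower a group's concurrency cap at runtime (hypothetical group name).
ALTER RESOURCE GROUP rg_reporting SET CONCURRENCY 5;
```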

Bypassing Resource Group Limits

  • gp_resource_group_bypass: When set to true, bypasses concurrency limits for the current session, allowing immediate query execution. Memory is allocated based on statement_mem. If insufficient memory is available, the query fails. This setting applies only at the session level and cannot be used inside a transaction or function.
  • gp_resource_group_bypass_catalog_query: Determines whether catalog queries bypass resource group limits. Default is true. Useful for GUI clients that run metadata queries. These queries operate outside resource groups and use statement_mem for memory allocation.
  • gp_resource_group_bypass_direct_dispatch: When set to true, direct-dispatch queries bypass CPU and memory limits of their assigned resource group. Memory is allocated via statement_mem, and queries fail if memory is insufficient. This setting is session-scoped only and cannot be used within transactions or functions.
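The bypass settings above are ordinary session-level GUCs. A minimal sketch (parameter names from the list above; issued outside any transaction block, as required):

```sql
-- Let this session's queries skip concurrency limits;
-- their memory is then sized by statement_mem.
SET gp_resource_group_bypass = true;

-- Since bypassed queries fall back to statement_mem,
-- it may be worth raising it for the session first.
SET statement_mem = '256MB';
```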

CPU Resource Management

YMatrix supports two CPU allocation modes:

  1. Percentage-based allocation
  2. Core-based allocation

Different resource groups on the same cluster may use different allocation modes, but each group uses only one mode at a time. The mode can be changed at runtime.

The parameter gp_resource_group_cpu_limit sets the maximum percentage of system CPU resources that can be allocated to resource groups on each segment node.

Core-Based CPU Allocation

CPUSET specifies the CPU cores reserved for a resource group. When CPUSET is configured, YMatrix disables CPU_MAX_PERCENT and CPU_WEIGHT for that group and sets both to -1.

Usage Notes:

  • Use a semicolon (;) to separate master and segment core specifications. Use commas (,) to list individual cores or ranges, enclosed in single quotes. Example: '1;1,3-4' assigns core 1 on the master and cores 1, 3, and 4 on segments.
  • Avoid using CPU core 0. Prefer lower-numbered cores when assigning. If you restore a database to a node with fewer CPU cores than the original (e.g., restoring a 16-core configuration to an 8-core node), operations may fail. For example, assigning core 9 on a 16-core system will cause failure on an 8-core system.
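For example, the core assignment from the note above could be written as follows (hypothetical group name; the string follows the master;segment convention just described):

```sql
-- Core 1 on the master; cores 1, 3, and 4 on every segment.
CREATE RESOURCE GROUP rg_pinned WITH (CPUSET = '1;1,3-4');
-- YMatrix then sets CPU_MAX_PERCENT and CPU_WEIGHT to -1 for this group.
```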

Percentage-Based CPU Allocation

CPU_MAX_PERCENT sets the upper limit on the CPU a resource group can use on each segment host. For example, a value of 40 allows the group to use up to 40% of available CPU resources. Idle CPU time is pooled globally and can be borrowed by other resource groups.

CPU_WEIGHT determines the relative share of CPU time a group receives when contending for resources. The default is 100, with a range of 1–500.

Usage Notes:

  • If one group has a weight of 100 and two others have weights of 50 (all with CPU_MAX_PERCENT=100), the first group gets 50% of total CPU time, and the others get 25% each.
  • Adding a fourth group with weight 100 brings the total weight to 300, reducing the first group’s share to ~33%; the two weight-50 groups then receive ~16.7% each, and the new group ~33%.

Configuration Example

| Group Name | CONCURRENCY | CPU_MAX_PERCENT | CPU_WEIGHT |
|---|---|---|---|
| default_group | 20 | 50 | 10 |
| admin_group | 10 | 70 | 30 |
| system_group | 10 | 30 | 10 |
| test | 10 | 10 | 10 |

  • default_group: Relative share = 10/(10+30+10+10) ≈ 16.7%. Under high load it is guaranteed at least ~16.7% of CPU time; with idle capacity available, it can use up to its hard limit of 50%.
  • admin_group: Relative share = 30/60 = 50%. Can use up to 70% when the system is otherwise idle.
  • test: Relative share ≈ 16.7%, but capped at 10% by CPU_MAX_PERCENT=10, even when the system is idle.
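The example configuration above could be defined as follows. This is a hedged sketch: default_group and admin_group are built-in groups, so their settings are altered rather than created, and system_group is left at its defaults here.

```sql
-- Adjust the built-in groups to match the example table.
ALTER RESOURCE GROUP default_group SET CPU_MAX_PERCENT 50;
ALTER RESOURCE GROUP default_group SET CPU_WEIGHT 10;
ALTER RESOURCE GROUP admin_group SET CONCURRENCY 10;
ALTER RESOURCE GROUP admin_group SET CPU_MAX_PERCENT 70;
ALTER RESOURCE GROUP admin_group SET CPU_WEIGHT 30;

-- Create the user-defined group from the example.
CREATE RESOURCE GROUP test WITH (
    CONCURRENCY = 10, CPU_MAX_PERCENT = 10, CPU_WEIGHT = 10);
```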

Memory Resource Management

MEMORY_QUOTA defines the total memory (in MB) reserved for a resource group on each segment. This is the aggregate memory available to all active queries in the group. Per-query memory is calculated as MEMORY_QUOTA / CONCURRENCY.

To override this, use gp_resgroup_memory_query_fixed_mem at the session level to specify a fixed memory amount for a query, which can exceed the group’s allocation.

Usage Notes:

  • If gp_resgroup_memory_query_fixed_mem is set, it overrides resource group memory settings.
  • If unset, per-query memory = MEMORY_QUOTA / CONCURRENCY.
  • If MEMORY_QUOTA is not set, statement_mem is used as the per-query limit.
  • If system memory is insufficient, queries spill to disk. If the limit gp_workfile_limit_files_per_query is reached, YMatrix raises an out-of-memory (OOM) error.
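Under these rules, a group's per-query share follows directly from its settings. A hedged sketch (hypothetical group name rg_adhoc; the value syntax of the fixed-memory GUC is assumed):

```sql
-- 1536 MB shared by up to 3 concurrent queries => 512 MB per query by default.
ALTER RESOURCE GROUP rg_adhoc SET MEMORY_QUOTA 1536;
ALTER RESOURCE GROUP rg_adhoc SET CONCURRENCY 3;

-- Session-level override for one large query; replaces the group calculation.
SET gp_resgroup_memory_query_fixed_mem = '800MB';
```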

Example Scenario

Consider a resource group adhoc with MEMORY_QUOTA = 1536 MB and CONCURRENCY = 3. Each query normally gets 512 MB.

  • User ADHOC_1 submits Q1 with gp_resgroup_memory_query_fixed_mem = 800 MB → allowed.
  • User ADHOC_2 submits Q2 with default 512 MB.
  • While Q1 and Q2 run (1312 MB combined), ADHOC_3 submits Q3 (512 MB), which would raise the group’s total to 1824 MB, above its 1536 MB quota. If system memory permits, Q3 still runs.
  • User ADHOC_4 submits Q4 with gp_resgroup_memory_query_fixed_mem = 700 MB. Since it bypasses group limits, it runs immediately.

Special Considerations

  • Queries bypassing limits via gp_resource_group_bypass or gp_resource_group_bypass_catalog_query use statement_mem as their memory limit.
  • If (MEMORY_QUOTA / CONCURRENCY) < statement_mem, then statement_mem is used as the fixed per-query memory.
  • statement_mem is capped by max_statement_mem.
  • Queries with plan cost below MIN_COST use statement_mem as their memory quota.

I/O Resource Management

IO_LIMIT restricts maximum disk I/O throughput and IOPS (reads/writes per second) for queries in a resource group, ensuring fair bandwidth allocation and preventing saturation. This parameter is configured per tablespace.

Note!
IO_LIMIT is supported only with cgroup v2.

I/O limits are configured using:

  • Tablespace name or OID (use * for all tablespaces).
  • rbps / wbps: Maximum read/write throughput in MB/s. Default: max (unlimited).
  • riops / wiops: Maximum read/write IOPS. Default: max (unlimited).

Configuration Notes

If IO_LIMIT is not set, all I/O parameters default to max (no limits). If only some values are specified (e.g., rbps), the unspecified ones (e.g., wbps, riops, wiops) default to max.
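Putting the pieces together, a per-tablespace limit might be declared as follows. This is a hedged sketch: the key=value string format is inferred from the parameter list above, and rg_etl is a hypothetical group name.

```sql
-- Cap reads at 500 MB/s and 1000 IOPS, and writes at 200 MB/s and 500 IOPS,
-- for the pg_default tablespace only; unspecified tablespaces stay unlimited.
ALTER RESOURCE GROUP rg_etl
    SET IO_LIMIT 'pg_default:rbps=500,wbps=200,riops=1000,wiops=500';

-- Or apply a single write-throughput cap across all tablespaces:
-- ALTER RESOURCE GROUP rg_etl SET IO_LIMIT '*:wbps=100';
```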