This document describes the deployment, table structure, usage, configuration, and disabling of the YMatrix query statistics feature.
The mxstat query statistics module collects information about query execution, including execution timing and resource consumption.
The module is included in the matrixmgr extension and is installed by default with the system. You can confirm that it is preloaded by checking shared_preload_libraries:
[mxadmin@mdw ~]$ gpconfig -s shared_preload_libraries
Values on all segments are consistent
GUC : shared_preload_libraries
Master value: matrixts,matrixmgr,matrixgate,telemetry,mars2
Segment value: matrixts,matrixmgr,matrixgate,telemetry,mars2
To view statistics, you must create the matrixmgr database and install the matrixmgr extension within it. (The database and extension are created by default after YMatrix cluster initialization.)
createdb matrixmgr
psql -d matrixmgr
matrixmgr=# CREATE EXTENSION matrixmgr CASCADE;
NOTICE: installing required extension "matrixts"
CREATE EXTENSION
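If you want to verify the installation, the objects created by the extension can be listed with a standard information_schema query (illustrative only; the exact object list depends on the YMatrix version):

SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'matrixmgr_internal'
ORDER BY table_name;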
After successful deployment, the following tables and views appear in the matrixmgr_internal schema of the matrixmgr database:
The mxstat_execute view provides query execution information; an example query is shown after the field table.
This view displays statistics for queries executed since the last history collection (default interval: 5 minutes). It includes the following fields:
| Column | Type | Description |
|---|---|---|
| seg | integer | Segment ID where the query plan was generated and dispatched |
| userid | oid | User OID |
| dbid | oid | Database OID |
| queryid | bigint | Query ID generated by the extension to group similar queries |
| nestlevel | integer | Nesting depth |
| query | text | Query text |
| calls_begin | bigint | Number of times the query started |
| calls_alive | bigint | Number of queries still running at last collection |
| calls_end | bigint | Number of queries that completed normally |
| total_time | double precision | Total execution time for this query class, in milliseconds |
| min_time | double precision | Minimum execution time for this query class, in milliseconds |
| max_time | double precision | Maximum execution time for this query class, in milliseconds |
| mean_time | double precision | Average execution time for this query class, in milliseconds |
| stddev_time | double precision | Standard deviation of execution time, in milliseconds |
| sample_planid | bigint | Execution plan ID |
| sample_start | timestamp with time zone | Timestamp when the slowest query started |
| sample_parse_done | timestamp with time zone | Timestamp when parsing completed for the slowest query |
| sample_plan_done | timestamp with time zone | Timestamp when planning completed for the slowest query |
| sample_exec_start | timestamp with time zone | Timestamp when execution started for the slowest query |
| sample_exec_end | timestamp with time zone | Timestamp when execution ended for the slowest query |
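As an illustration (not part of the extension itself), the slowest query classes recorded since the last collection can be listed directly from this view:

-- List the ten query classes with the highest maximum execution time
SELECT queryid, query, calls_end, mean_time, max_time
FROM matrixmgr_internal.mxstat_execute
ORDER BY max_time DESC
LIMIT 10;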
The mxstat_usage view provides query resource consumption information; an example query is shown after the field table.
This view displays resource usage statistics for queries executed since the last history collection (default interval: 5 minutes). It includes the following fields:
| Column | Type | Description |
|---|---|---|
| seg | integer | Segment ID where the query executed |
| userid | oid | User OID |
| dbid | oid | Database OID |
| queryid | bigint | Query ID generated by the extension to group similar queries |
| nestlevel | integer | Nesting depth |
| rows | bigint | Total number of rows retrieved or affected by the statement |
| shared_blks_hit | bigint | Total number of shared block buffer hits |
| shared_blks_read | bigint | Total number of shared blocks read |
| shared_blks_dirtied | bigint | Total number of shared blocks dirtied |
| shared_blks_written | bigint | Total number of shared blocks written |
| local_blks_hit | bigint | Total number of local block buffer hits |
| local_blks_read | bigint | Total number of local blocks read |
| local_blks_dirtied | bigint | Total number of local blocks dirtied |
| local_blks_written | bigint | Total number of local blocks written |
| temp_blks_read | bigint | Total number of temporary blocks read |
| temp_blks_written | bigint | Total number of temporary blocks written |
| blk_read_time | double precision | Total time spent reading blocks, in milliseconds |
| blk_write_time | double precision | Total time spent writing blocks, in milliseconds |
| ru_utime | double precision | User CPU time, in seconds |
| ru_stime | double precision | System CPU time, in seconds |
| ru_maxrss | bigint | Physical memory used, including shared libraries, in KB |
| ru_ixrss | bigint | Integral shared memory size |
| ru_idrss | bigint | Integral unshared data size |
| ru_isrss | bigint | Integral unshared stack size |
| ru_minflt | bigint | Number of minor page faults (no I/O required) |
| ru_majflt | bigint | Number of major page faults (I/O required) |
| ru_nswap | bigint | Number of swaps |
| ru_inblock | bigint | Number of block input operations (file system reads) |
| ru_oublock | bigint | Number of block output operations (file system writes) |
| ru_msgsnd | bigint | Number of messages sent |
| ru_msgrcv | bigint | Number of messages received |
| ru_nsignals | bigint | Number of signals received |
| ru_nvcsw | bigint | Number of voluntary context switches (process yields CPU) |
| ru_nivcsw | bigint | Number of involuntary context switches (time slice expired or preempted) |
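For example, the query classes that read the most shared blocks can be found with a query like the following (illustrative only):

-- Rank query classes by shared blocks read, per segment
SELECT queryid, seg, shared_blks_read, blk_read_time, ru_utime, ru_stime
FROM matrixmgr_internal.mxstat_usage
ORDER BY shared_blks_read DESC
LIMIT 10;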
mx_query_execute_history is a partitioned table that stores historical data from the mxstat_execute view, collected every 5 minutes by default. Its structure matches mxstat_execute, with an additional ts_bucket column indicating the collection timestamp.
This is a partitioned table that stores historical data from the mxstat_usage view, collected every 5 minutes by default. Its structure matches mxstat_usage, with an additional ts_bucket column indicating the collection timestamp.
The mx_querytext table stores the mapping between queryid and query text. Like the other history tables, it is populated periodically so that historical queries retain their SQL text.
A view that joins mx_query_execute_history and mx_querytext on queryid, allowing retrieval of both historical query statistics and SQL text.
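As a sketch of how the history objects can be combined, assuming the history table keeps the same columns as mxstat_execute plus ts_bucket, and that mx_querytext exposes the query text in a column named query (an assumption), the slowest query classes of the last day could be retrieved with something like:

-- Join history statistics with the stored query text (column names partly assumed)
SELECT h.ts_bucket, h.queryid, t.query, h.max_time
FROM matrixmgr_internal.mx_query_execute_history h
JOIN matrixmgr_internal.mx_querytext t USING (queryid)
WHERE h.ts_bucket > now() - interval '1 day'
ORDER BY h.max_time DESC
LIMIT 10;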
mxstat does not record every query individually. Instead, it groups similar queries into categories based on their parse trees.
For example, the following queries all insert into the test1 table with different parameters. They generate the same queryid and are grouped into one category:
INSERT INTO test1 VALUES(1);
INSERT INTO test1 VALUES(2);
INSERT INTO test1 VALUES(3);
The following queries differ only in conditional values and are also grouped together:
SELECT * FROM test1 WHERE c1 = 1;
SELECT * FROM test1 WHERE c1 = 2;
SELECT * FROM test1 WHERE c1 = 3;
However, the following queries access different tables and are not grouped:
SELECT * FROM test1 WHERE c1 = 1;
SELECT * FROM test2 WHERE c1 = 1;
SELECT * FROM test3 WHERE c1 = 1;
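One way to observe the grouping (illustrative only, assuming the statements above were actually executed) is to compare the queryid values recorded for them; statements in the same category share a queryid:

-- The three INSERTs should appear as a single row with calls_end = 3,
-- while the SELECTs on test1, test2 and test3 get distinct queryid values
SELECT queryid, query, calls_end
FROM matrixmgr_internal.mxstat_execute
WHERE query LIKE '%test1%' OR query LIKE '%test2%' OR query LIKE '%test3%';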
Execute the following three SQL statements:
SELECT pg_sleep(5);
SELECT pg_sleep(10);
SELECT pg_sleep(15);
Then query the mxstat_execute statistics:
matrixmgr=# SELECT * FROM matrixmgr_internal.mxstat_execute WHERE query LIKE '%pg_sleep%';
seg | userid | dbid | queryid | nestlevel | query | calls_begin | calls_alive | calls_end | total_time | min_time
| max_time | mean_time | stddev_time | sample_planid | sample_start | sample_parse_done |
sample_plan_done | sample_exec_start | sample_exec_end
-----+--------+-------+----------------------+-----------+---------------------+-------------+-------------+-----------+------------+----------
+-----------+--------------------+-------------------+----------------------+-------------------------------+-------------------------------+--
-----------------------------+-------------------------------+-------------------------------
-1 | 10 | 16384 | -2007749946425010549 | 0 | select pg_sleep($1) | 3 | 0 | 3 | 30041 | 5009.054
| 15018.717 | 10013.666666666666 | 4086.427819588182 | -2693056513545111817 | 2022-03-25 13:58:58.503851-04 | 2022-03-25 13:58:58.503933-04 | 2
022-03-25 13:58:58.503994-04 | 2022-03-25 13:58:58.504008-04 | 2022-03-25 13:59:13.522725-04
(1 row)
The result shows the query was called three times. The query text is normalized, with the time parameter replaced by $ followed by the parameter number (here $1). The total (30,041 ms), minimum (≈5,009 ms), maximum (≈15,019 ms), and mean (≈10,014 ms) execution times match the three sleeps of 5, 10, and 15 seconds. Timestamps for each phase of the slowest execution (the 15-second sleep) are also recorded.
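A quick sanity check of the aggregates (illustrative only): mean_time should equal total_time divided by the number of completed calls.

-- For the pg_sleep query class above, both columns should return ≈10013.7 ms
SELECT total_time / calls_end AS computed_mean, mean_time
FROM matrixmgr_internal.mxstat_execute
WHERE queryid = -2007749946425010549;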
To check resource usage, query mxstat_usage with the queryid reported by mxstat_execute:
matrixmgr=# SELECT * FROM matrixmgr_internal.mxstat_usage WHERE queryid = -2007749946425010549;
seg | userid | dbid | queryid | nestlevel | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_writte
n | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time | blk_write
_time | ru_utime | ru_stime | ru_maxrss | ru_ixrss | ru_idrss | ru_isrss | ru_minflt | ru_majflt | ru_nswap | ru_inblock | ru_oublock | ru_msgs
nd | ru_msgrcv | ru_nsignals | ru_nvcsw | ru_nivcsw
-----+--------+-------+----------------------+-----------+------+-----------------+------------------+---------------------+-------------------
--+----------------+-----------------+--------------------+--------------------+----------------+-------------------+---------------+----------
------+----------+----------+-----------+----------+----------+----------+-----------+-----------+----------+------------+------------+--------
---+-----------+-------------+----------+-----------
-1 | 10 | 16384 | -2007749946425010549 | 0 | 3 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0.001297 | 0.000431 | 20568 | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 122 | 0
(1 row)
The result shows this query ran only on the Master node, so only one seg = -1 record exists.
Now consider another example:
CREATE TABLE test1(c1 int, c2 int) DISTRIBUTED BY(c1);
Query mxstat_execute:
matrixmgr=# SELECT * FROM matrixmgr_internal.mxstat_execute WHERE query LIKE '%create table test1%';
seg | userid | dbid | queryid | nestlevel | query | calls_begin | calls_alive |
calls_end | total_time | min_time | max_time | mean_time | stddev_time | sample_planid | sample_start | sample_parse_done | sa
mple_plan_done | sample_exec_start | sample_exec_end
-----+--------+-------+----------------------+-----------+-------------------------------------------------------+-------------+-------------+-
----------+------------+----------+----------+-----------+-------------+---------------+-------------------------------+-------------------+---
---------------+-------------------------------+-------------------------------
-1 | 10 | 16384 | -6276724884379903029 | 0 | create table test1(c1 int, c2 int) distributed by(c1) | 1 | 0 |
1 | 46.221 | 46.221 | 46.221 | 46.221 | 0 | 0 | 2022-03-25 14:08:51.754458-04 | |
| 2022-03-25 14:08:51.754735-04 | 2022-03-25 14:08:51.800956-04
(1 row)
Check resource usage:
matrixmgr=# SELECT * FROM matrixmgr_internal.mxstat_usage WHERE queryid = -6276724884379903029;
seg | userid | dbid | queryid | nestlevel | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_writte
n | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time | blk_write
_time | ru_utime | ru_stime | ru_maxrss | ru_ixrss | ru_idrss | ru_isrss | ru_minflt | ru_majflt | ru_nswap | ru_inblock | ru_oublock | ru_msgs
nd | ru_msgrcv | ru_nsignals | ru_nvcsw | ru_nivcsw
-----+--------+-------+----------------------+-----------+------+-----------------+------------------+---------------------+-------------------
--+----------------+-----------------+--------------------+--------------------+----------------+-------------------+---------------+----------
------+----------+----------+-----------+----------+----------+----------+-----------+-----------+----------+------------+------------+--------
---+-----------+-------------+----------+-----------
-1 | 10 | 16384 | -6276724884379903029 | 0 | 0 | 295 | 59 | 21 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0.004053 | 0 | 22744 | 0 | 0 | 0 | 429 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 6 | 0
1 | 10 | 16384 | -6276724884379903029 | 0 | 0 | 261 | 82 | 19 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0.001691 | 0.001558 | 19284 | 0 | 0 | 0 | 510 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 1
0 | 10 | 16384 | -6276724884379903029 | 0 | 0 | 314 | 34 | 19 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0.002537 | 0.000193 | 18508 | 0 | 0 | 0 | 574 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 1 | 1
2 | 10 | 16384 | -6276724884379903029 | 0 | 0 | 261 | 82 | 19 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0.003043 | 2.9e-05 | 19292 | 0 | 0 | 0 | 514 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 2
(4 rows)
The result shows resource usage recorded on all segments, as table creation occurs on each node.
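To see the total resource consumption of a query class across all segments, the per-segment rows can be aggregated, for example (illustrative only):

-- Sum resource usage over all segments for the CREATE TABLE statement above
SELECT queryid,
       sum(shared_blks_hit)  AS shared_blks_hit,
       sum(shared_blks_read) AS shared_blks_read,
       sum(ru_utime)         AS user_cpu_s,
       sum(ru_stime)         AS system_cpu_s
FROM matrixmgr_internal.mxstat_usage
WHERE queryid = -6276724884379903029
GROUP BY queryid;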
In previous results, the nestlevel value was 0 because no queries were nested. The following example demonstrates nested execution, typically achieved via UDF calls.
CREATE OR REPLACE FUNCTION nest_query() RETURNS SETOF RECORD
AS $$ SELECT 1;$$
LANGUAGE SQL;
This creates a UDF named nest_query that executes the SQL: SELECT 1. Call the UDF:
Note: mxstat does not track nested statements by default. To enable nested tracking, set mxstat_statements.track to 'all':
mxadmin=# SET mxstat_statements.track TO 'all';
SET
mxadmin=# SELECT nest_query();
 nest_query
------------
 (1)
(1 row)
Now query the internal statement SELECT 1. Since 1 is a constant, the normalized query uses $1:
matrixmgr=# SELECT * FROM matrixmgr_internal.mxstat_execute WHERE query LIKE '%SELECT $1%';
seg | userid | dbid | queryid | nestlevel | query | calls_begin | calls_alive | calls_end | total_time | min_time | max_time
| mean_time | stddev_time | sample_planid | sample_start | sample_parse_done | sample_plan_done
| sample_exec_start | sample_exec_end
-----+--------+-------+----------------------+-----------+-----------+-------------+-------------+-----------+------------+----------+---------
-+-----------+-------------+---------------------+-------------------------------+-------------------------------+-----------------------------
--+-------------------------------+-------------------------------
-1 | 10 | 16384 | -4554727679305370053 | 1 | SELECT $1 | 1 | 0 | 1 | 0.031 | 0.031 | 0.031
| 0.031 | 0 | -382705668420232707 | 2022-03-25 14:35:30.668124-04 | 2022-03-25 14:35:30.668373-04 | 2022-03-25 14:35:30.668403-0
4 | 2022-03-25 14:35:30.668408-04 | 2022-03-25 14:35:30.668439-04
(1 row)
The nestlevel is now 1, indicating a nesting depth of 1. Deeper nesting increases this value. mxstat records up to 31 levels of nesting.
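As a hypothetical illustration of deeper nesting (this function is not part of the example above), a second UDF that calls nest_query would push the inner SELECT 1 to nestlevel 2:

-- Hypothetical wrapper: its own body is recorded at nestlevel 1,
-- and the SELECT 1 inside nest_query at nestlevel 2
CREATE OR REPLACE FUNCTION nest_query_outer() RETURNS int
AS $$ SELECT x FROM nest_query() AS t(x int); $$
LANGUAGE SQL;

SELECT nest_query_outer();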
4 Configuration
mxstat provides the following GUC parameters to control monitoring behavior:
| Name | Type | Description |
|---|---|---|
| mxstat_statements.max | Integer | Number of hash slots for storing query timing information. Default is 5000 |
| mxstat_statements.usage_multiple | Integer | Multiplier for the number of hash slots used to store query resource usage, relative to query timing slots. A larger value ensures each query can find its corresponding resource consumption record. Default is 2 |
| mxstat_statements.track | String | Controls the level of statement tracking. top: track only top-level statements, no nested tracking (default); all: track all statements, including nested ones; none: disable tracking, effectively turning off the feature |
| mxstat_statements.track_utility | Boolean | Whether to track execution of utility commands. Default is true |
| mxstat_statements.save | Boolean | Whether to dump in-memory statistics to disk during cluster restart. Default is true |
| mxstat_statements.harvest_interval | Integer | Interval (in minutes) at which query statistics are harvested from shared memory into historical tables. Default is 5 minutes |
| mxstat_statements.harvest_usage | Boolean | Whether to collect resource usage information. Default is true |
| mxstat_statements.ignored_databases | String | Comma-separated list of databases to ignore. Default is template0,template1,matrixmgr |
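These parameters can be inspected in a session with SHOW and changed for the current session with SET; cluster-wide changes use gpconfig plus a configuration reload, as shown in the next section. For example (illustrative):

-- Check the current tracking level and harvest interval
SHOW mxstat_statements.track;
SHOW mxstat_statements.harvest_interval;

-- Track nested statements in the current session only
SET mxstat_statements.track TO 'all';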
Note: stopping the query statistics feature does not require stopping or restarting the database.
First, stop the background worker process on the Master:
$ psql -d matrixmgr -c "SELECT matrixmgr_internal.mxstat_stop_worker();"
Next, disable the tracking feature cluster-wide and reload the configuration:
$ gpconfig -s mxstat_statements.track
$ gpconfig -c mxstat_statements.track -v none
$ mxstop -u
Finally, clear the collected data:
$ psql -d matrixmgr -c "SELECT matrixmgr_internal.mxstat_reset();"
To restart the background worker process, run:
$ psql -d matrixmgr -c "SELECT matrixmgr_internal.mxstat_start_worker();"