# Connecting to the Prometheus Monitoring Ecosystem

YMatrix also provides a self-developed exporter that integrates seamlessly with the Prometheus monitoring ecosystem.

Note that YMatrix's exporter and the corresponding dashboards cover only the database itself and do not include operating-system metrics. Monitoring OS-level metrics requires an additional exporter, such as node_exporter. The module layout is as follows: Module layout diagram

## 1 Deployment

The YMatrix exporter is included in the YMatrix installation package and only needs to be activated before use. Then install and deploy node_exporter, Prometheus, and Grafana.

### 1.1 Activate the YMatrix exporter

* Create the matrixmgr database:

  ```shell
  createdb matrixmgr
  ```

* Connect to the matrixmgr database, create the matrixts and matrixmgr extensions, and activate the exporter:

  ```shell
  psql -d matrixmgr
  ```

  ```sql
  matrixmgr=# CREATE EXTENSION matrixts;
  matrixmgr=# CREATE EXTENSION matrixmgr;
  matrixmgr=# SELECT mxmgr_init_exporter();
  ```

After success, you can observe that a new schema named `exporter` appears in the matrixmgr database. The tables and views in this schema contain cluster monitoring and configuration information. Do not modify the definitions or contents of these tables and views yourself.
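As a quick sanity check, you can confirm the schema exists from psql (the exact set of objects inside `exporter` may vary between versions):

```sql
-- List the exporter schema and the tables it contains (psql meta-commands)
matrixmgr=# \dn exporter
matrixmgr=# \dt exporter.*
```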

This command starts matrixdb_exporter on all machines in the cluster.

> ***Note!***
> If the old monitoring system is running on the cluster, it must be shut down first, otherwise activation will fail.
> To shut it down: `SELECT mxmgr_remove_all('local');`

### 1.2 node_exporter installation and configuration
node_exporter is used to monitor operating-system metrics. Download the latest version of [node_exporter](https://prometheus.io/download/) from the official website. Here we take 1.3.1 as an example (operating as the root user).

Download node_exporter:

```shell
wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
```

Unzip the installation package:

```shell
tar -xvf node_exporter-1.3.1.linux-amd64.tar.gz -C /usr/local
```

Create a soft link:

```shell
ln -s /usr/local/node_exporter-1.3.1.linux-amd64/ /usr/local/node_exporter
```

Generate the systemd unit file:

```shell
cat << EOF > /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
After=network.target

[Service]
User=root
Group=root
ExecStart=/usr/local/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target
EOF
```

Start node_exporter:

```shell
systemctl start node_exporter
systemctl status node_exporter
systemctl enable node_exporter
```

> ***Note!***
> node_exporter must be deployed on every host of the cluster, so the above steps must be performed on all hosts.
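To repeat these steps across hosts, a small loop can help. A minimal sketch that only prints the commands to run (a dry run; the host names mdw, sdw1, and sdw2 are hypothetical placeholders for your cluster's hosts):

```shell
# Dry run: print the per-host command instead of executing it.
# Replace HOSTS with the actual host names of your cluster.
HOSTS="mdw sdw1 sdw2"
for h in $HOSTS; do
  echo "ssh root@$h 'systemctl enable --now node_exporter'"
done
```

Remove the `echo` (keeping the `ssh ...` command) once you have verified the output looks right for your environment.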

### 1.3 Prometheus installation and configuration
Prepare a host that can access the exporter ports on all hosts of the cluster. This can be the Master or Standby Master node, or a separate machine (Linux, macOS, Windows, etc. are all supported).

Install the latest version of Prometheus. The official download and installation link is [https://prometheus.io/download/](https://prometheus.io/download/).

The following commands describe the operation method using CentOS 7 as an example. For commands for other operating systems, please refer to the corresponding operating system usage guide.

* **Download and install Prometheus (requires root permission)**

> ***Note!***  
> During the installation of Prometheus, you can choose whether to install the mxgate monitoring interface. The following example includes the mxgate monitoring configuration; if you do not install it, delete that part. See the comments in the sample configuration for details.

<a name="systemd"><br/></a>

Download the installation package:

```shell
wget https://github.com/prometheus/prometheus/releases/download/v2.36.1/prometheus-2.36.1.linux-amd64.tar.gz
```

Unzip to /usr/local and create a soft link:

```shell
tar -xf ./prometheus-2.36.1.linux-amd64.tar.gz -C /usr/local
ln -s /usr/local/prometheus-2.36.1.linux-amd64/ /usr/local/prometheus
```

Create a Prometheus user:

```shell
useradd -s /sbin/nologin -M prometheus
```

Create a data directory:

```shell
mkdir /data/prometheus -p
```

Change the owner and group of the directories:

```shell
chown -R prometheus:prometheus /usr/local/prometheus/
chown -R prometheus:prometheus /data/prometheus/
```

Configure systemd:

```shell
cat << EOF > /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Documentation=https://prometheus.io/
After=network.target

[Service]
Type=simple
User=prometheus
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml --storage.tsdb.path=/data/prometheus
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Modify the Prometheus configuration file. Add the IPs and ports of matrixdb_exporter and node_exporter to scrape_configs (matrixdb_exporter uses port 9273 by default, node_exporter port 9100).

Note also that the matrixdb and mxgate monitoring dashboards filter clusters through the matrixdb_cluster variable, so add a relabel_configs entry that replaces the job label with matrixdb_cluster, filling in the cluster name as the replacement. You can refer to the following configuration:

```yaml
scrape_configs:
  - job_name: "matrixdb_exporter"
    relabel_configs:
      - source_labels: ['job']
        regex: .*
        target_label: matrixdb_cluster
        replacement: cluster1
        action: replace
    static_configs:
      - targets: ["localhost:9273"]
  - job_name: "node_exporter"
    static_configs:
      - targets: ["localhost:9100"]
```

When the cluster contains multiple hosts, the IP and port of every host must be added to the targets array. For example:

```yaml
scrape_configs:
  - job_name: "matrixdb_exporter"
    relabel_configs:
      - source_labels: ['job']
        regex: .*
        target_label: matrixdb_cluster
        replacement: cluster1
        action: replace
    static_configs:
      - targets: ["192.168.0.1:9273", "192.168.0.2:9273", "192.168.0.3:9273"]
  - job_name: "node_exporter"
    static_configs:
      - targets: ["192.168.0.1:9100", "192.168.0.2:9100", "192.168.0.3:9100"]
```
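For larger clusters, the `targets` arrays can be generated from a host list instead of being typed by hand. A minimal shell sketch, using the example IPs from above (replace `hosts` with your cluster's addresses; the port differs per exporter):

```shell
# Build a Prometheus-style targets line from a host list and a port.
hosts="192.168.0.1 192.168.0.2 192.168.0.3"

make_targets() {
  port=$1
  out=""
  for h in $hosts; do
    out="$out\"$h:$port\", "
  done
  # Trim the trailing ", " and wrap in brackets.
  echo "targets: [${out%, }]"
}

make_targets 9273   # matrixdb_exporter
make_targets 9100   # node_exporter
```

Paste the emitted lines into the corresponding `static_configs` entries.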

If you want to deploy mxgate monitoring, add the following job at this location and restart Prometheus. This job is optional:

```yaml
  - job_name: "gate_exporter"
    relabel_configs:
      - source_labels: ['job']
        regex: .*
        target_label: matrixdb_cluster
        replacement: cluster1
        action: replace
    static_configs:
      - targets: ["192.168.0.1:9275"]
```

> ***Note!***
> Pay attention to indentation when editing YAML files; indentation mistakes cause syntax errors and Prometheus will fail to start.

Start Prometheus:

```shell
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus
```

After startup, you can access the WebUI provided by Prometheus and view the running status at `http://IP:9090/`.

Select Status -> Targets in the Prometheus main panel. View Data Source 1

You will see that the newly configured matrixdb_exporter and node_exporter, as well as Prometheus's own exporter, are all in the UP state, indicating that the monitoring deployment succeeded. View Data Source 2

### 1.4 Grafana installation and configuration

Prometheus stores the monitoring data, while Grafana pulls the data from Prometheus and displays it. As with Prometheus, prepare a machine that can access Prometheus (Grafana and Prometheus can be installed on the same machine).

Note that the Grafana version must not be lower than 8.0.0; the latest version is recommended. The official download and installation link is https://grafana.com/grafana/download.

The following commands describe the operation using CentOS 7 as an example. For commands for other operating systems, please refer to the corresponding usage guide (requires the root user).

* Download and install Grafana:

  ```shell
  wget https://dl.grafana.com/enterprise/release/grafana-enterprise-8.5.5-1.x86_64.rpm
  yum install grafana-enterprise-8.5.5-1.x86_64.rpm
  ```

* Start Grafana:

  ```shell
  sudo systemctl daemon-reload
  sudo systemctl start grafana-server
  sudo systemctl status grafana-server
  sudo systemctl enable grafana-server
  ```

After the installation is complete, use a browser to access port 3000 on the host:

```
http://<IP or domain name of the installation node>:3000
```

You can see the Grafana homepage. Log in with the default username and password (admin/admin). For security, change the password after logging in.

### 1.5 Adding data sources and panels

After installing and deploying the exporter, Prometheus, and Grafana, you need to load the dashboard to display the monitoring charts.

Each dashboard depends on a data source, so first add one:

1. Click the Settings button on the Grafana main interface and select Data Sources: Add Data Source 1

2. Click Add data source: Add Data Source 2

3. Select Prometheus in the database category: Add Data Source 3

4. Give the data source a name, such as MatrixDB-Prometheus; then fill in the IP and port of Prometheus in the URL field: Add Data Source 4

5. After the data source is added, load the dashboard. Click the plus button on the Grafana main panel and select Import: Add dashboard_1

6. There are several ways to import: an official URL or ID, pasting JSON into a text box, or loading from a file. Here we load the MatrixDB Prometheus dashboard file from the installation directory, $GPHOME/share/doc/postgresql/extension/PrometheusDashboard.json: Add dashboard_2

When loading the file, you need to select the Prometheus data source; here, select the source you just created. Add dashboard_3

Then you can see the newly loaded panel in the panel list.
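If you prefer not to add the data source through the UI, Grafana also supports file-based provisioning. A minimal sketch, assuming Grafana's default provisioning directory and Prometheus running on localhost:9090 (adjust the path and URL to your setup):

```yaml
# /etc/grafana/provisioning/datasources/matrixdb-prometheus.yaml
apiVersion: 1
datasources:
  - name: MatrixDB-Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
```

After placing the file, restart grafana-server for the data source to take effect.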

For a detailed interpretation of the panel monitoring metrics, please refer to YMatrix Monitoring Parameter Interpretation.

### 1.6 Add the node_exporter panel

The above steps demonstrated how to add the Prometheus dashboard for YMatrix. The following describes how to deploy node_exporter's dashboard as well.

Loading the node_exporter dashboard: because the corresponding dashboard has already been published on the official Grafana website, you can load it simply by filling in its ID.

During loading, you need to select the data source; select the MatrixDB-Prometheus source you just added. If node_exporter reports to a separate Prometheus instance, you need to add a data source for it separately. node_exporter_2

## 2 Management

After cluster metric collection is activated, each host runs the collection service, and the relevant logs are saved in the /var/log/matrixdb directory.

If you restart YMatrix, or restart the server and then start YMatrix, the YMatrix exporter will also start automatically without manual intervention.

To shut down the matrixdb exporter service, connect to the matrixmgr database and execute mxmgr_remove_exporter:

```shell
psql -d matrixmgr
```

```sql
matrixmgr=# SELECT mxmgr_remove_exporter();
```

To activate data collection again, connect to the matrixmgr database and execute mxmgr_deploy_exporter:

```sql
matrixmgr=# SELECT mxmgr_deploy_exporter();
```

> ***Note!***
> mxmgr_remove_exporter only shuts down the matrixdb exporter; node_exporter, Grafana, and Prometheus need to be shut down separately.

## 3 Upgrade

The old monitoring system has been superseded by the new monitoring system, which connects to the Prometheus ecosystem.

First complete the Prometheus installation and deployment and upgrade Grafana to the latest version, then:

```shell
# Shut down the old monitoring first
psql -d matrixmgr
matrixmgr=# SELECT mxmgr_remove_all('local');

# Initialize and start the new monitoring
matrixmgr=# SELECT mxmgr_init_exporter();
```

> ***Note!***
> The new and old monitoring systems can also be deployed at the same time, i.e., the new monitoring can be started without shutting down the old one, but this is somewhat redundant.

## 4 MatrixGate Monitoring Panel Deployment

As a high-performance data access component, MatrixGate is also compatible with the Prometheus monitoring ecosystem. Normally, MatrixDB monitoring data and MatrixGate monitoring data are stored in the same Prometheus system, and the following steps assume this setup.

### 4.1 Deploy gate_exporter

Like the YMatrix exporter, deploying gate_exporter also requires the matrixmgr database with the matrixts and matrixmgr extensions; here it is assumed they have already been created.

Then call mxmgr_init_gate_exporter to initialize and start it:

```sql
matrixmgr=# SELECT mxmgr_init_gate_exporter();
```

After success, you can observe that a new schema named `gate_exporter` appears in the matrixmgr database. The tables and views in this schema contain gate monitoring and configuration information. Do not modify the definitions or contents of these tables and views yourself.

This command starts gate_exporter on the Master host.

To shut down gate_exporter, execute mxmgr_remove_gate_exporter:

```sql
matrixmgr=# SELECT mxmgr_remove_gate_exporter();
```

To start it again, execute mxmgr_deploy_gate_exporter:

```sql
matrixmgr=# SELECT mxmgr_deploy_gate_exporter();
```

### 4.2 Loading the monitoring panel

Just like loading the YMatrix monitoring panel, the MatrixGate panel file is located at $GPHOME/share/doc/postgresql/extension/MxgateDashboard.json; load it from that file.

For a detailed interpretation of the panel monitoring metrics, please refer to MatrixGate Monitoring Parameter Interpretation.