Quick onboard
Note!
YMatrix uses vectorized execution to improve query performance. Deployment will fail if the host (physical or virtual machine) does not support the AVX2 instruction set.
If you are deploying on a virtual machine, ensure that the underlying physical host supports AVX2 and that AVX2 is enabled in the VM configuration.
| Operating System | Supported CPU Architecture |
|---|---|
| CentOS 7 / Red Hat Enterprise Linux 7 | x86_64 |
This document uses a three-node cluster as an example: mdw, sdw1, and sdw2.
The server installation process consists of six steps: checking basic system information, preparation, installing the database RPM package, installing Python dependencies, deploying the database, and post-installation configuration.
Before starting the installation, check the basic system information. This is a good practice, as understanding the server helps you plan the cluster deployment effectively.
| Step | Command | Purpose |
|---|---|---|
| 1 | free -h | View memory usage |
| 2 | df -h | View disk space |
| 3 | lscpu | View number of CPU cores |
| 4 | cat /etc/system-release | View OS version |
| 5 | uname -a | Display all kernel information in the following order: kernel name, hostname, kernel release, kernel version, hardware architecture, processor type (non-portable), hardware platform (non-portable), OS name; the -p and -i fields are omitted if unavailable |
| 6 | tail -11 /proc/cpuinfo | View CPU details |
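Since deployment fails without AVX2 (see the note above), it is worth verifying the instruction set explicitly. A quick check on a Linux host, counting the logical CPUs whose flags include avx2 (a result of 0 means the host or VM does not expose AVX2):
# grep -c avx2 /proc/cpuinfo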
Copy the RPM package to all nodes in the cluster.
~ scp <local_path> <username>@<server_ip>:<server_path>
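For the example cluster in this document, copying the package from the current directory on mdw to the segment hosts might look like this (paths and user are illustrative):
# scp matrixdb6_6.2.0+enterprise-1.el7.x86_64.rpm root@sdw1:/root/
# scp matrixdb6_6.2.0+enterprise-1.el7.x86_64.rpm root@sdw2:/root/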
Note!
Perform the following steps as the root user or with sudo privileges on all nodes.
YMatrix requires Python 3.6. Install it and set it as the default Python version:
# yum install centos-release-scl
# yum install rh-python36
# scl enable rh-python36 bash
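To confirm that the shell opened by scl enable is using the expected interpreter, check the version; it should report Python 3.6.x:
# python3 --version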
Disable the firewall:
# systemctl stop firewalld.service
# systemctl disable firewalld.service
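To verify that the firewall is no longer running (the command should print inactive):
# systemctl is-active firewalld.service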
Disable SELinux by editing /etc/selinux/config and setting SELINUX=disabled:
# sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
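You can check the runtime state with getenforce; it should report Permissive after setenforce 0, and Disabled after a reboot with the edited configuration:
# getenforce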
Ensure each node has a persistent hostname. If not already set, use the following command. In this example, the master node is named mdw:
# hostnamectl set-hostname mdw
Set hostnames for the two segment nodes as well, running the command on the respective node (sdw1, sdw2):
# hostnamectl set-hostname sdw1
# hostnamectl set-hostname sdw2
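To confirm the change on each node:
# hostname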
Ensure all nodes can resolve each other by hostname and IP address. Add entries to /etc/hosts to map hostnames to local network interface addresses:
# vim /etc/hosts
For example, all three nodes should have the following entries in /etc/hosts:
192.168.100.10 mdw
192.168.100.11 sdw1
192.168.100.12 sdw2
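Once /etc/hosts is updated, a simple way to confirm that the nodes can reach each other by name (run from mdw, and likewise from the segment hosts):
# ping -c 3 sdw1
# ping -c 3 sdw2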
Note!
You must install the database RPM package using the root user or sudo on all nodes. System dependencies will be installed automatically.
By default, the package installs under /opt/matrixdb6.
Run the following command:
# yum install matrixdb6_6.2.0+enterprise-1.el7.x86_64.rpm
After successful installation, the supervisord and MXUI processes start automatically. These background services provide the web-based management interface and process control.
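A quick sanity check that these services are running (process names as referenced above; the exact names may differ in case):
# ps -ef | grep -iE 'supervisord|mxui' | grep -v grep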
If you need to customize service ports, modify the configuration file after RPM installation. This step is required only on the Master node:
# vim /etc/matrixdb6/defaults.conf
YMatrix provides a graphical deployment tool, served by the MXUI process. The remote web interface is accessible via ports 8240 and 4617, which are open by default on all nodes after installation.
Note!
If the graphical interface is unavailable, refer to Command-Line Installation.
Open a browser and navigate to the installation wizard URL. The server whose IP address you enter will become the Master node of your cluster (in this example, mdw):
http://<IP>:8240/
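If you are following the example cluster in this document, mdw is the Master, so the wizard would be reached at http://192.168.100.10:8240/ (per the /etc/hosts entries above).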
On the first page of the wizard, enter the superuser password. You can view the default password using the following command:

On the second page, select Multi-Node Deployment, then click Next.

Proceed with the four-step multi-node deployment process.
Step 1: Add Nodes
Click the "Add" button.

Enter the IP addresses, hostnames, or FQDNs of sdw1 and sdw2, then click Confirm and Next.

Step 2: Configure Cluster Parameters
Enable Data Mirroring to create backup segments. This is recommended in production environments for high availability. The system automatically suggests the largest available disk and an appropriate number of segments based on system resources. Adjust according to your use case. The configured cluster topology is displayed visually. Click Next after confirmation.

Step 3: Set Data and etcd Storage Paths
Select a storage path for etcd on each server. The etcd cluster will be created on an odd number of servers to ensure leader election consistency and avoid ties.

If you enable Use Data Disk for etcd, be aware of the associated risks.

Step 4: Execute Deployment
Review the configuration summary. Click Deploy to begin.

The system automatically deploys the cluster and displays detailed progress. When all steps complete successfully, deployment is finished.

Deployment completed.

By default, YMatrix allows remote connections. If you did not select Allow Remote Access to Database during installation, manually add a line like the following to the pg_hba.conf file to allow all users from any IP to connect with password authentication:
host all all 0.0.0.0/0 md5
After making changes, switch to the mxadmin user and reload the configuration:
# su - mxadmin
$ mxstop -u
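After the configuration is reloaded, you can verify remote access from another machine with any PostgreSQL-compatible client. A minimal sketch using psql, where the master IP, port, and database are placeholders to fill in for your environment:
$ psql -h <master_ip> -p <port> -U mxadmin -d postgres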
Use the following commands to start, stop, restart, or check the status of the YMatrix cluster:
$ mxstart -a
$ mxstop -af
$ mxstop -arf
$ mxstate -s
| Command | Purpose |
|---|---|
| mxstart -a | Start the cluster. |
| mxstop -a | Stop the cluster. (Stops only after active sessions end.) |
| mxstop -af | Forcefully stop the cluster immediately. |
| mxstop -arf | Forcefully restart the cluster immediately. |
| mxstate -s | Check cluster status. |