Deploying a Cluster on CentOS 8 and Red Hat 8

  • Operating System and Platform Requirements
Operating System: CentOS 8 - 8.4, Red Hat Enterprise Linux 8
Supported CPU Architectures: x86_64, ARM 64/v8
  • Example Cluster Information

This document uses a three-node cluster as an example: the master node is mdw, and the two segment nodes are sdw1 and sdw2.

Server Installation

The server installation process consists of five steps: checking basic server information, installation preparation, installing the database RPM package, deploying the database, and post-installation configuration.

1. Check Basic Server Information

Before starting the installation, check the basic system information. This is a good practice, as understanding the server helps you plan the cluster deployment more effectively.

1. free -h: view memory usage.
2. df -h: view disk space.
3. lscpu: view the number of CPU cores.
4. cat /etc/system-release: view the OS version.
5. uname -a: display kernel information in the following order (fields whose -p or -i result is unknown are omitted): kernel name, hostname, kernel release, kernel version, hardware architecture, processor type (non-portable), hardware platform (non-portable), OS name.
6. tail -11 /proc/cpuinfo: view CPU details.
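
To collect all of this information at once on a node, the commands can be chained into a single report. This is only a convenience sketch; the output path /tmp/server-info.txt is an arbitrary example:

# { free -h; df -h; lscpu; cat /etc/system-release; uname -a; tail -11 /proc/cpuinfo; } > /tmp/server-info.txt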

2. Installation Preparation

2.1 Initialize the DNF Repository

CentOS 8 and RHEL 8 use dnf as the default package manager.

EPEL (Extra Packages for Enterprise Linux) provides additional packages not included in the standard Red Hat and CentOS repositories. PowerTools contains libraries and developer tools; it is available on RHEL/CentOS 8 but disabled by default. Some EPEL packages depend on packages in PowerTools, so if you enable EPEL, enable PowerTools as well.

Perform the following steps as the root user or with root privileges.

# dnf -y install epel-release && \
  dnf -y install 'dnf-command(config-manager)' && \
  dnf config-manager --set-enabled powertools && \
  dnf config-manager --set-enabled epel && \
  dnf -y update
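
To confirm that both repositories are enabled before continuing, a quick check such as the following can be used (repository IDs may differ slightly between CentOS and RHEL builds):

# dnf repolist enabled | grep -iE 'epel|powertools'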

2.2 Install Dependencies

Install required dependencies as the root user or with root privileges.

# dnf install -y libicu python3-devel python3-pip openssl-devel openssh-server net-tools
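
As an optional sanity check, verify that the Python toolchain installed by python3-devel and python3-pip is available:

# python3 --version && pip3 --version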

2.3 Copy RPM Package to Servers

Copy the RPM package from your local machine to all cluster nodes.

$ scp <local file path> <username>@<server IP>:<server file path>
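
For the example cluster, copying the RPM to all three nodes might look like the following. The destination directory /tmp and the root user are assumptions; adjust them to your environment:

$ for ip in 192.168.100.10 192.168.100.11 192.168.100.12; do
    scp matrixdb6_6.2.0+enterprise-1.el8.x86_64.rpm root@${ip}:/tmp/
  done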

2.4 Modify System Configuration

Disable the firewall:

# systemctl stop firewalld.service
# systemctl disable firewalld.service

Disable SELinux. Edit /etc/selinux/config and set SELINUX=disabled:

# sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
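
To confirm the change, getenforce should now report Permissive for the running session; the config file change takes full effect after the next reboot:

# getenforce
# grep '^SELINUX=' /etc/selinux/config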

Stop the sssd services:

# systemctl stop sssd
# systemctl stop sssd-kcm.socket

Ensure each node has a persistent hostname. If not set, use the following command. For example, on the master node:

# hostnamectl set-hostname mdw

Set hostnames on the two segment nodes accordingly:

# hostnamectl set-hostname sdw1
# hostnamectl set-hostname sdw2

Ensure all nodes can reach each other via hostname or IP address. Add entries to /etc/hosts to map hostnames to local network interface addresses. For example, the /etc/hosts file on each of the three nodes should contain lines similar to:

192.168.100.10 mdw
192.168.100.11 sdw1
192.168.100.12 sdw2
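
A quick way to confirm that name resolution and connectivity work is to ping each hostname once from every node:

# for host in mdw sdw1 sdw2; do ping -c 1 ${host}; done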

3. Install YMatrix RPM Package

Note!
You must install the database RPM package on all server nodes using the root user or sudo privileges. System dependencies will be installed automatically.

# dnf install -y matrixdb6_6.2.0+enterprise-1.el8.x86_64.rpm

After a successful installation, the supervisord and MXUI processes start automatically. These background services provide the web-based management interface and process control.
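
To confirm that these services are running, you can check the process list; the exact process names may vary slightly between versions, so treat this as a rough check:

# ps -ef | grep -iE 'supervisord|mxui' | grep -v grep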

If you need to customize ports, modify the /etc/matrixdb6/defaults.conf file after installing the RPM. This step is only required on the Master node.

# vim /etc/matrixdb6/defaults.conf

4. Deploy the Database

YMatrix provides a graphical deployment tool. The remote web interface is accessible via ports 8240 and 4617. After installation, these ports are open by default on all nodes. The MXUI process provides the web interface service.
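
You can verify on the Master node that the web service is listening on these ports. If nothing is returned, check that the supervisord and MXUI processes are running:

# ss -lntp | grep -E '8240|4617'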

Note!
If the graphical interface is unavailable, refer to command-line deployment.

Access the web-based installation wizard using a browser. The server whose IP address you enter will become the Master node of your cluster (in this example, mdw):

http://<IP>:8240/

On the first page of the installer, enter the superuser password. You can view it using the following command:

On the second page, select "Multi-node Deployment", then click Next.


Proceed with the four steps of multi-node deployment.

Step 1: Add Nodes
Click the "Add" button.

Enter the IP addresses, hostnames, or FQDNs of sdw1 and sdw2 in the text box. Click "Confirm", then "Next".


Step 2: Configure Cluster Parameters
"Data Mirroring" determines whether segment nodes include backup mirrors. Enable mirroring in production environments for high availability. The system automatically recommends the largest disk and an appropriate number of segments based on system resources. Adjust according to your use case. The configured cluster topology can be viewed in the diagram. Click "Next" after confirmation.


Step 3: Set Data Storage and etcd Paths
Select a storage path for etcd on each server. The etcd cluster is created on an automatically selected odd number of servers so that leader elections can reach a majority and avoid ties.

If you enable the checkbox to deploy etcd on data disks, be aware of the associated risks.


Step 4: Execute Deployment
This step displays the previously configured parameters. Review them and click "Deploy".

The system automatically deploys the cluster and shows detailed steps and progress. When all steps complete successfully, deployment is finished.

Deployment completed.

5. Post-Installation Configuration

By default, YMatrix allows remote connections. If you did not select "Allow remote access to the database" during installation, manually add a line like the following to the pg_hba.conf file. This allows any user from any IP to connect with password authentication. Adjust the IP range or database name as needed to reduce security risks:

host  all       all   0.0.0.0/0  md5
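
For example, to limit password logins to the example cluster's subnet rather than all addresses, the entry might instead read as follows (the CIDR is specific to this example network and should be adjusted):

host  all       all   192.168.100.0/24  md5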

After making changes, switch to the mxadmin user and reload the pg_hba.conf file:

# su - mxadmin
$ mxstop -u    

Use the following commands to start, stop, restart, or check the status of the YMatrix cluster:

$ mxstart -a
$ mxstop -af
$ mxstop -arf 
$ mxstate -s
mxstart -a: Start the cluster.
mxstop -a: Stop the cluster (waits for active sessions to end; may hang if sessions remain open).
mxstop -af: Forcefully stop the cluster immediately.
mxstop -arf: Restart the cluster; waits for currently running SQL statements to finish (may hang if sessions remain open).
mxstate -s: Check the cluster status.
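
As a final check, you can connect remotely with psql from another machine. The port 5432 and the postgres database used here are assumptions based on common defaults; substitute the values configured for your cluster:

$ psql -h 192.168.100.10 -p 5432 -U mxadmin -d postgres -c 'SELECT version();'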