Build a PXC cluster under CentOS8

Introduction to PXC##

PXC is the abbreviation of [Percona XtraDB Cluster](https://www.percona.com/doc/percona-xtradb-cluster/LATEST/index.html), a free [MySQL](https://cloud.tencent.com/product/cdb?from=10680) cluster product from Percona. PXC connects separate MySQL instances through Galera cluster replication to form a multi-master cluster. In a PXC cluster, every MySQL node is readable and writable; in master-slave terms, every node is a master, and there are no read-only nodes.

PXC is essentially a Galera-based multi-master synchronous replication plug-in for OLTP workloads, and it is mainly used to achieve strong data consistency within a MySQL cluster. PXC can cluster MySQL derivatives such as MariaDB and Percona Server. Since Percona Server is the derivative closest to the MySQL enterprise edition, performs noticeably better than standard MySQL, and remains essentially compatible with it, PXC clusters are usually built on Percona Server.

For the selection of database cluster solutions, please refer to:

Features of PXC###

- Synchronous replication: a transaction either commits on every node or on none, which guarantees strong data consistency
- True multi-master: every node accepts both reads and writes
- Nodes that join the cluster are provisioned automatically (via SST or IST)
- The trade-off: overall write performance is bounded by the slowest node in the cluster

Install PXC and form a cluster##

Environmental preparation###

Environmental version description:

- Operating system: CentOS 8
- PXC: Percona XtraDB Cluster 5.7.28

There are several common derivative versions of MySQL, and Percona Server is one of them. Percona Server is chosen here because it is the closest to the enterprise version of MySQL. The comparison chart of each derivative version is as follows:

The PXC cluster design of this article is shown in the figure:

According to the figure, we need to create three virtual machines to build a three-node PXC cluster:

Node description:

| Node  | Host      | IP              |
| ----- | --------- | --------------- |
| Node1 | PXC-Node1 | 192.168.190.132 |
| Node2 | PXC-Node2 | 192.168.190.133 |
| Node3 | PXC-Node3 | 192.168.190.134 |

The configuration of each virtual machine is as follows:

A PXC cluster sacrifices some performance to guarantee strong data consistency: the more nodes in the cluster, the longer data synchronization takes. This raises the question of how many database servers a cluster should contain to strike the best balance between consistency and performance.

Generally speaking, a PXC cluster of no more than 15 nodes still performs well. Such a cluster can then be treated as a single shard, and several shards can be configured in MyCat to handle data partitioning and concurrent access.


System preparation###

Some CentOS versions bundle mariadb-libs by default, and it needs to be uninstalled before installing PXC:

[root@PXC-Node1 ~]# yum -y remove mari*
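
To confirm the removal, you can list any remaining mariadb packages (a quick sanity check that is not part of the original steps; the command prints nothing once the libraries are gone):

[root@PXC-Node1 ~]# rpm -qa | grep -i mariadb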

PXC cluster uses four ports:

| Port | Description |
| ---- | ----------- |
| 3306 | MySQL service port |
| 4444 | Request full synchronization (SST) port |
| 4567 | Communication port between database nodes |
| 4568 | Request incremental synchronization (IST) port |

So if the system has a firewall enabled, these ports need to be opened:

[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=3306/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4444/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4567/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4568/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --reload
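
To verify the rules took effect, the open ports in the zone can be listed (an extra check, not in the original steps); it should print the four ports:

[root@PXC-Node1 ~]# firewall-cmd --zone=public --list-ports
3306/tcp 4444/tcp 4567/tcp 4568/tcp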

Install PXC###

First go to the official document:

There are two straightforward ways to install PXC: download the rpm packages from the official website and install them locally, or install online from the official yum repository. This article demonstrates the local installation method. First open the following URL:

After selecting the appropriate version, copy the download link:

Then use the wget command to download on CentOS, as shown in the following example:

[root@PXC-Node1 ~]# cd /usr/local/src
[root@PXC-Node1 /usr/local/src]# wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-5.7.28-31.41/binary/redhat/8/x86_64/Percona-XtraDB-Cluster-5.7.28-31.41-r514-el8-x86_64-bundle.tar

Create a directory to store the rpm files, and extract the downloaded PXC bundle into the newly created directory:

[root@PXC-Node1 /usr/local/src]# mkdir pxc-rpms
[root@PXC-Node1 /usr/local/src]# tar -xvf Percona-XtraDB-Cluster-5.7.28-31.41-r514-el8-x86_64-bundle.tar -C pxc-rpms
[root@PXC-Node1 /usr/local/src]# ls pxc-rpms
Percona-XtraDB-Cluster-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-57-debugsource-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-client-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-client-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-devel-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-full-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-garbd-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-garbd-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-server-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-server-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-shared-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-shared-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-test-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-test-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm

In addition, PXC depends on qpress and percona-xtrabackup-24, whose rpm download links can be found in the Percona-provided repository. Enter the pxc-rpms directory and download the rpm packages of these two components, as follows:

[root@PXC-Node1 /usr/local/src]# cd pxc-rpms
[root@PXC-Node1 /usr/local/src/pxc-rpms]# wget https://repo.percona.com/release/8/RPMS/x86_64/qpress-11-1.el8.x86_64.rpm
[root@PXC-Node1 /usr/local/src/pxc-rpms]# wget https://repo.percona.com/release/8/RPMS/x86_64/percona-xtrabackup-24-2.4.18-1.el8.x86_64.rpm

After completing the above steps, you can now install PXC locally via the yum command:

[root@PXC-Node1 /usr/local/src/pxc-rpms]# yum localinstall -y *.rpm

After a successful installation, the mysql commands are available on the system. If the version information can be displayed as follows, the installation succeeded:

[root@PXC-Node1 /usr/local/src/pxc-rpms]# mysql --version
mysql  Ver 14.14 Distrib 5.7.28-31, for Linux (x86_64) using  7.0

Configure PXC cluster###

After installation, some configuration is required before the cluster can be started. The PXC configuration files are located in the /etc/percona-xtradb-cluster.conf.d/ directory by default, and the /etc/my.cnf file simply includes them:

[root@PXC-Node1 ~]# cd /etc/percona-xtradb-cluster.conf.d/
[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# ll
total 12
-rw-r--r-- 1 root root  381 Dec 13 17:19 mysqld.cnf       # MySQL-related configuration
-rw-r--r-- 1 root root  440 Dec 13 17:19 mysqld_safe.cnf  # mysqld_safe-related configuration
-rw-r--r-- 1 root root 1066 Dec 13 17:19 wsrep.cnf        # PXC cluster-related configuration

Add some basic configurations such as character sets to the mysqld.cnf file:

[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# vim mysqld.cnf
[mysqld]
...

# Set character set
character_set_server=utf8
# Set the listening ip
bind-address=0.0.0.0
# Skip DNS resolution
skip-name-resolve

Then configure the PXC cluster, modify the following configuration items in the wsrep.cnf file:

[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# vim wsrep.cnf
[mysqld]
# The unique ID of the MySQL instance in the PXC cluster. It cannot be repeated and must be a number
server-id=1
# Path to Galera library file
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# The name of the PXC cluster
wsrep_cluster_name=pxc-cluster
# IP of all nodes in the cluster
wsrep_cluster_address=gcomm://192.168.190.132,192.168.190.133,192.168.190.134
# The name of the current node
wsrep_node_name=pxc-node-01
# IP of the current node
wsrep_node_address=192.168.190.132
# Synchronization method (mysqldump, rsync, xtrabackup)
wsrep_sst_method=xtrabackup-v2
# Account and password used during synchronization
wsrep_sst_auth=admin:Abc_123456
# Use strict synchronization mode
pxc_strict_mode=ENFORCING
# Row-based replication (safe and reliable)
binlog_format=ROW
# Default storage engine
default_storage_engine=InnoDB
# Interleaved auto-increment locking, so inserts do not lock the table
innodb_autoinc_lock_mode=2
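
Note that server-id, wsrep_node_name and wsrep_node_address must be unique on each node. As a sketch, the node-specific items on PXC-Node2 would look like this (the name pxc-node-02 simply follows the naming pattern used above):

[mysqld]
# Unique ID of this instance in the cluster
server-id=2
# Name and IP of this node
wsrep_node_name=pxc-node-02
wsrep_node_address=192.168.190.133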

Start PXC cluster###

So far, we have completed the installation and configuration of PXC on the virtual machine PXC-Node1. Complete the same steps on the other two nodes; they are not repeated here.

When all nodes are ready, use the following command to start the PXC cluster. Note that this command bootstraps the first node; when the cluster is started for the first time, any of the three nodes can act as the first node. Here PXC-Node1 is used as the first node, so execute the command on that virtual machine:

[root@PXC-Node1 ~]# systemctl start mysql@bootstrap.service

The other nodes only need to start the MySQL service normally. After startup, they will automatically join the cluster according to the configuration in the wsrep.cnf file:

[root@PXC-Node2 ~]# systemctl start mysqld
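
Once all three nodes are up, a quick way to confirm that they joined is to check the cluster size from any node (an extra check, not in the original steps); with the three-node topology above, the Value column should read 3:

mysql> show status like 'wsrep_cluster_size';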

Disable the automatic startup of Percona Server:

[root@localhost ~]# systemctl disable mysqld
Removed /etc/systemd/system/multi-user.target.wants/mysqld.service.
Removed /etc/systemd/system/mysql.service.

Create a database account###

Then modify the default password of the root account. The initial default password can be found in the MySQL log file:
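
On Percona Server 5.7 the temporary root password is written to the error log, so assuming the default log location /var/log/mysqld.log, a command like the following should locate it:

[root@localhost ~]# grep 'temporary password' /var/log/mysqld.log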

Copy the default password, and then use the mysql_secure_installation command to change the password of the root account:

[root@localhost ~]# mysql_secure_installation

For security reasons, the root account is generally not allowed to log in remotely, so we need to create a separate database account for remote access. This account is also used for data synchronization within the PXC cluster, matching the wsrep_sst_auth configuration item in the wsrep.cnf file:

[root@localhost ~]# mysql -uroot -p
mysql> create user 'admin'@'%' identified by 'Abc_123456';
mysql> grant all privileges on *.* to 'admin'@'%';
mysql> flush privileges;

After creating the account, use the client tool to perform a remote connection test to see if the connection is successful:
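
For example, from any machine that has the mysql command-line client installed, a connection test against node 1 might look like this (the client host is arbitrary):

mysql -h 192.168.190.132 -P 3306 -uadmin -p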

So far, the construction of the PXC cluster is complete, and the synchronization effect can already be observed: the root password change and the account creation above are replicated to the other two nodes. In other words, the root passwords of the other two nodes have already been changed as well, and an admin account exists on them too. You can verify this yourself.

In addition, we can also use the following statement to confirm the status of the cluster:

show status like 'wsrep_cluster%';

The result:

Variable description:

- wsrep_cluster_conf_id: how many times the cluster membership has changed
- wsrep_cluster_size: the number of nodes currently in the cluster
- wsrep_cluster_state_uuid: the UUID of the current cluster state
- wsrep_cluster_status: the component the node belongs to; Primary means the node is in the primary component and can serve requests


Verify cluster data synchronization##

1. Verify that a created database is synchronized

Create a test library in node 1:
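
In SQL form, this step is simply:

mysql> create database test;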

After the creation completes, the test library should also be visible on the other nodes:

2. Verify that a created data table is synchronized

Create a student table in the test library in node 1:
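
In SQL form, a minimal sketch of this step (the two-column schema is an assumption; note that pxc_strict_mode=ENFORCING requires every table to have a primary key):

mysql> create table test.student(id int unsigned auto_increment primary key, name varchar(50) not null);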

After creation, you should be able to see this student table on the other nodes:

3. Verify that table data is synchronized

Insert a piece of data into the student table in node 1:
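
In SQL form (the sample name 'Jack' is arbitrary test data):

mysql> insert into test.student(name) values('Jack');

The row can then be queried on node 2 or node 3:

mysql> select * from test.student;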

At this time, other nodes should also be able to see this data:


Description of cluster status parameters##

The status parameters of the cluster can be queried through SQL statements, as follows:

show status like '%wsrep%';

Since there are many status variables that can be queried, only the commonly used ones are explained here. The PXC cluster parameters can be divided into the following categories:
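
For example, two frequently consulted variables are the local node state and the flow-control pause fraction (a condensed illustration rather than the full tables):

mysql> show status like 'wsrep_local_state_comment';
mysql> show status like 'wsrep_flow_control_paused';

A healthy node reports Synced for the first, and a value near 0.0 for the second; values approaching 1.0 mean replication is being throttled by flow control.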

**PXC node state diagram:** a node typically moves through the OPEN, PRIMARY, JOINER, JOINED, SYNCED and DONOR states as it joins and synchronizes with the cluster.

**PXC cluster state diagram:** the cluster as seen by a node is in one of the PRIMARY, NON_PRIMARY or DISCONNECTED states.

Official documents:


About the online and offline of PXC nodes##

1. How to take a PXC node offline safely

Shut the node down with the command that matches how it was started, as shown below.
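
Matching the start commands used earlier in this article:

# For a node that was started as the first (bootstrap) node:
[root@PXC-Node1 ~]# systemctl stop mysql@bootstrap.service
# For a node that was started normally:
[root@PXC-Node2 ~]# systemctl stop mysqld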


2. If all PXC nodes went offline safely, the last node to go offline must be started first when the cluster is brought back up

When a cluster is started for the first time, any node can be started as the first node. But for a cluster that has already been running, when it goes offline and is brought back online, the node that went offline last needs to be started first, as the first node. Whether a node can be started as the first node can in fact be determined by looking at its grastate.dat file:

[root@PXC-Node1 ~]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    2c915504-39ac-11ea-bba7-a294386c4285
seqno:   -1
safe_to_bootstrap: 0

**3. If the PXC nodes all exited unexpectedly, but not at the same time**

As mentioned at the beginning of this article, when more than half of the nodes in a PXC cluster become inaccessible due to unexpected downtime, the cluster stops running. However, if the nodes leave through a safe shutdown, the cluster does not stop automatically; it only shrinks. The cluster stops automatically only when more than half of the nodes go offline unexpectedly. Unexpected offline situations include, for example, crashes, hangs, forced power-off, and loss of network connectivity.

As long as the nodes in the PXC cluster do not all exit unexpectedly at the same time, then when only one node is left in the cluster, that node automatically sets the value of safe_to_bootstrap in the grastate.dat file to 1. Therefore, when restarting the cluster, the node that exited last should also be started first.


4. If all PXC nodes exited unexpectedly at the same time, the grastate.dat file needs to be modified

When all nodes in the cluster exit unexpectedly at the same time, safe_to_bootstrap is 0 on every node, because no node had time to update the value. When safe_to_bootstrap is 0 on all nodes, the PXC cluster cannot be started.

In this case, we can only manually select one node, change its safe_to_bootstrap to 1, and then start that node as the first node:

[root@PXC-Node1 ~]# vim /var/lib/mysql/grastate.dat
...
safe_to_bootstrap: 1

[root@PXC-Node1 ~]# systemctl start mysql@bootstrap.service

Then start other nodes in turn:

[root@PXC-Node2 ~]# systemctl start mysqld

5. If there are still running nodes in the cluster, other offline nodes simply need to come back online as normal nodes:

[root@PXC-Node2 ~]# systemctl start mysqld
