
How to install MySQL Percona XtraDB Cluster 5.7 on Ubuntu 18.04 LTS - Part 2

This guide describes the procedure for setting up Percona XtraDB Cluster 5.7 on three Ubuntu 18.04 LTS nodes in multi-master replication. Multi-master replication allows writes on every node, so if one node fails, we can keep working on the others as if nothing had happened.

First of all, why do we choose three nodes and not only two? In any cluster, the number of nodes should be odd, so that if a node gets disconnected, we can assume the larger group of servers has the freshest data, which should then be replicated to the downed node to avoid data loss. This only concerns resolving conflicts in data replication; we won't lose data that was written only to the disconnected node.

This avoids a circumstance called split brain, in which we cannot automatically decide which node has the correct data. Think for example of a two-node cluster where both nodes are disconnected from each other and the same record is written to both: who wins when they come back online? We don't know, so split brain occurs and we have to decide manually which record is the right one.

The number of nodes needed to determine which part of the cluster has the right data is called the QUORUM; in our case the quorum will be 2, so we need at least 2 servers to stay connected to each other. If all three nodes go down, we have a split brain and must manually decide which server should be started in bootstrap mode: this is the procedure for choosing the server that will act as the primary when resuming from the split brain (sketched after the note below).
Avoid creating a cluster with two or any even number of nodes, because this can lead to split brain. For more information, see Cluster Failover.
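If you ever do face a full outage or split brain and need to bring the cluster back, the standard Galera recovery procedure is, roughly sketched: compare the replication position recorded in /var/lib/mysql/grastate.dat on every node, mark the most advanced one as safe to bootstrap, and bootstrap only that node before starting the others normally:
# cat /var/lib/mysql/grastate.dat        # compare the seqno value on every node; pick the highest
# vi /var/lib/mysql/grastate.dat         # on the chosen node only, set safe_to_bootstrap: 1
# /etc/init.d/mysql bootstrap-pxc        # bootstrap the chosen node
# /etc/init.d/mysql start                # run this on the remaining nodes afterwards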
This tutorial describes how to install and configure a three-node MySQL Percona XtraDB Cluster on Ubuntu 18.04 servers, using the packages from the Percona repositories. I set it up with three database servers plus an HAProxy load balancer, with these IPs:
DB LB ( HAProxy )
Hostname: dblb.awsmonster.com
IP address: 10.10.4.80

DB Node 1
Hostname: db1.awsmonster.com
IP address: 10.10.3.230

DB Node 2
Hostname: db2.awsmonster.com
IP address: 10.10.4.81

DB Node 3
Hostname: db3.awsmonster.com
IP address: 10.10.7.12 
On each host, modify the file /etc/hosts as follows to ensure hostname resolution works correctly.
# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.10.4.80     dblb.awsmonster.com	dblb
10.10.3.230     db1.awsmonster.com	db1
10.10.4.81     db2.awsmonster.com	db2
10.10.7.12     db3.awsmonster.com	db3

:x  (save & quit) 
Create a common script called serverreadiness.sh and run it on all four servers, adjusting the hostname on each.
# touch  serverreadiness.sh
# chmod  775 serverreadiness.sh
# vi  serverreadiness.sh 

#!/bin/bash

apt install vim wget net-tools -y
/etc/init.d/apparmor stop
systemctl disable apparmor
systemctl mask apparmor
apt remove --auto-remove ufw -y
apt purge --auto-remove ufw -y
apt remove --auto-remove fail2ban -y

timedatectl set-ntp yes
timedatectl set-timezone Asia/Dhaka
hostnamectl set-hostname db1.awsmonster.com   # change this on each node
#apt update && apt upgrade -y

fallocate -l 8G /swapfile
# dd if=/dev/zero of=/swapfile count=8192 bs=1MiB   # alternative to fallocate if the filesystem does not support it
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s
echo "/swapfile                               swap                      swap    defaults        0 0" >> /etc/fstab
# swap is already active; the fstab entry just makes it persistent across reboots

sysctl vm.swappiness=10
sysctl vm.vfs_cache_pressure=50
cp -a /etc/sysctl.conf /root/
echo "vm.swappiness=10" >> /etc/sysctl.conf
echo "vm.vfs_cache_pressure=50" >> /etc/sysctl.conf

apt update && apt upgrade -y
#apt --fix-broken upgrade -y   # for Debian 10
#apt-get autoremove
#reboot 

:x   ( save & quit )
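With the script saved and made executable, run it on each of the four servers (remember to adjust the set-hostname line first), for example:
# ./serverreadiness.sh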
Don't forget to install the development tools! See the separate guide: How to install development tools on Debian or Ubuntu 16.04 LTS.
Dependencies on Ubuntu: When installing on an Ubuntu system, make sure that the universe repository is enabled to satisfy all essential dependencies.
Make sure that the following ports are not blocked by firewall or used by other software. Percona XtraDB Cluster requires them for communication.
  • 3306 TCP
  • 4444 TCP
  • 4567 TCP & UDP 
  • 4568 TCP
  • 9200 TCP
  • 873 TCP ( rsync )
Note: To view the listening ports, enter the following command:
 $ sudo ss -tunlp 
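To verify that the nodes can actually reach each other on these ports once the services are up, a quick check from one node to another can help (a minimal sketch, assuming netcat is installed; 10.10.4.81 is db2 from the table above):
# nc -zv 10.10.4.81 3306
# nc -zv 10.10.4.81 4567
# nc -zv 10.10.4.81 4444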

Installing Percona XtraDB Cluster

On all three DB nodes, execute the following commands as root (it's better to create a script):
sudo apt update && apt upgrade -y
apt-get autoremove
wget https://repo.percona.com/apt/percona-release_0.1-6.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-6.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-xtradb-cluster-57 
Note: you will be prompted to choose a root password for MySQL during installation. After the packages have been installed, mysqld will start automatically.
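To confirm that mysqld is indeed up on each node, a quick status check should be enough (on Ubuntu the service is registered as mysql):
# /etc/init.d/mysql status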

Percona XtraDB Cluster is distributed with several useful UDFs (User Defined Functions) from Percona Toolkit. Run the following commands on all DB nodes to create these functions:
# vi .my.cnf
[client]
password='PASS-WORD'
:x ( save & close )

# mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
# mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
# mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'" 
Note: Stop mysqld on all three nodes using /etc/init.d/mysql stop.

Configuring the First node

Make sure to add these lines to the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the first node (db1.awsmonster.com), at the end of the [mysqld] section:
# vi /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://10.10.3.230,10.10.4.81,10.10.7.12

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

sql_mode=""
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=10.10.3.230
# Cluster name
wsrep_cluster_name=awsmonster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=db1

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=DISABLED

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="sstuser:SST-PassWord"
 
Pay attention to the password you set up there; in my case it is "SST-PassWord".

Start the first node with the following command. This will start the first node and bootstrap the cluster; you will see something like this if all is well:
root@db1:~# /etc/init.d/mysql bootstrap-pxc
[ ok ] Bootstrapping Percona XtraDB Cluster database server: mysqld .. 
After the first node has been started, connect to MySQL with the classic mysql command; the cluster status can then be checked by executing the query show status like 'wsrep%'; as in the example below:
mysql> show status like 'wsrep%'; 
This output shows that the cluster has been successfully bootstrapped: wsrep_cluster_size should be 1, wsrep_cluster_status should be Primary, wsrep_local_state_comment should be Synced, and wsrep_ready should be ON.
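For a quick check without scanning the full output, you can query just those variables from the shell, for example (the values shown are what a freshly bootstrapped single node typically reports):
# mysql -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment','wsrep_ready');"
+---------------------------+---------+
| Variable_name             | Value   |
+---------------------------+---------+
| wsrep_cluster_size        | 1       |
| wsrep_cluster_status      | Primary |
| wsrep_local_state_comment | Synced  |
| wsrep_ready               | ON      |
+---------------------------+---------+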

Create SST user on First Node

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'SST-PassWord';
mysql> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

OR

mysql> CREATE USER 'sstuser'@'%' IDENTIFIED BY 'SST-PassWord';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'%';
mysql> FLUSH PRIVILEGES; 

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;

Configuring the Second node 

Append the following lines to the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the second node (db2.awsmonster.com), so that it contains the following data:
# vi /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://10.10.3.230,10.10.4.81,10.10.7.12

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

sql_mode=""
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=10.10.4.81
# Cluster name
wsrep_cluster_name=awsmonster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=db2

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=DISABLED

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="sstuser:SST-PassWord"
 
Start the second node with the following command (note that this time it is not started in bootstrap mode!):
 root@db2:~# /etc/init.d/mysql start 
After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of the status from the second node (db2.awsmonster.com):
 mysql> show status like 'wsrep%'; 

Configuring the Third node 

Append the following lines to the configuration file /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf on the third node (db3.awsmonster.com), so that it contains the following data:
 # vi /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://10.10.3.230,10.10.4.81,10.10.7.12

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

sql_mode=""
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=10.10.7.12
# Cluster name
wsrep_cluster_name=awsmonster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=db3

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=DISABLED

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="sstuser:SST-PassWord"
 
Start the third node with the following command (note that this time it is not started in bootstrap mode!):
 root@db3:~# /etc/init.d/mysql start 
After the server has been started, it should receive SST automatically. Cluster status can now be checked on all nodes. The following is an example of the status from the third node (db3.awsmonster.com):
 mysql> show status like 'wsrep%'; 
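At this point every node should report the full cluster. A quick way to confirm is to run the following on any of the three nodes (the output shown is what you should expect once all members have joined):
# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+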
If you encounter any problems, take a look at /var/log/syslog to check whether everything is OK.
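A simple way to pull only the cluster-related messages out of syslog (just a grep, nothing PXC-specific):
# grep -i wsrep /var/log/syslog | tail -n 50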

Testing replication 

To test the replication, let's create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.
  • Create a new database on the second node:
mysql@db2> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec) 
  • Create a table on the third node:
 mysql@db3> USE percona;
Database changed

mysql@db3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec) 
  • Insert records on the first node:
mysql@db1> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec) 
  • Retrieve all the rows from that table on the second node:
 mysql@db2> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec) 
