MySQL InnoDB Cluster (or just Group Replication) is becoming more and more popular, and this solution no longer attracts only experts. On social media, forums and other discussions, people are asking me what is the best way to migrate a running environment using traditional asynchronous replication [Master -> Slave(s)] to InnoDB Cluster.
The following procedure is what I currently recommend. The objective of these steps is to keep the downtime of the database service to a minimum.
We can divide the procedure into 9 steps:
- the current situation
- preparing the future cluster
- data transfer
- replication from current system
- creation of the cluster with a single instance
- adding instances to the cluster
- configure the router
- test phase
- pointing the application to the new solution
1. the current situation
[diagram: current architecture]
Our application connects to mysql1 which also acts as master for mysql2. mysql3 and mysql4 are spare servers that will be used for the new MySQL InnoDB Cluster.
The final architecture will be a MySQL InnoDB Cluster of 3 machines: mysql2, mysql3 and mysql4.
2. preparing the cluster
The current Master-Slave setup must use GTIDs (so at least MySQL 5.6).
MySQL >= 5.7.17 must be used for the InnoDB Cluster group members.
Read the Group Replication requirements & limitations in the MySQL manual.
So, on mysql3 and mysql4, we only need to install MySQL >=5.7.17.
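By the way, if you are not sure whether the current Master-Slave setup already uses GTIDs, a quick check on mysql1 could look like this; both variables should report ON:
mysql1 mysql> SELECT @@gtid_mode, @@enforce_gtid_consistency;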
There are also two different approaches to create such a cluster:
- create it manually and then use MySQL Shell to create the metadata needed by MySQL-router
- let’s do everything using MySQL Shell
We will of course use the second option.
3. data transfer
As of now, provisioning a new member is a manual operation, just like provisioning a new slave for any other type of MySQL replication: use a backup!
Group Replication is “just” another type of MySQL replication, therefore we need to use the same concept. Of course we understand that everybody would benefit from an automatic provisioning process, but we don’t have such a solution yet.
The backup must be consistent and provide the GTID of the last transaction being part of the backup.
You can use any option you want: a logical backup with mysqldump, a physical backup with MEB or Xtrabackup, etc.
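If you go the mysqldump route, for example, a consistent dump that also records the executed GTIDs could be taken like this (a sketch; adapt the credentials and the dump location to your environment):
[mysql1 ~]# mysqldump --host=127.0.0.1 --user=root --password=X \
            --all-databases --single-transaction --triggers \
            --routines --events --set-gtid-purged=ON > /tmp/dump.sql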
I will use MEB to illustrate the different operations.
backup:
Let’s take a backup on mysql1:
[mysql1 ~]# mysqlbackup --host=127.0.0.1 --backup-dir=/tmp/backup \
--user=root --password=X backup-and-apply-log
Of course we could have taken the backup from mysql2 too.
transfer:
We need to copy the backup from mysql1 to mysql3:
[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp
restore:
Be sure that mysqld is not running on mysql3:
[mysql3 ~]# systemctl stop mysqld
Now, it’s time to restore the backup on mysql3; this consists of a simple copy-back and an ownership change:
[mysql3 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql3 ~]# rm /var/lib/mysql/mysql*-bin* # just some cleanup
[mysql3 ~]# chown -R mysql. /var/lib/mysql
4. replication from current system
At the end of this section, we will have just a normal asynchronous slave.
We need to verify that my.cnf is configured properly for the server to act as a slave:
[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
log_bin
log_slave_updates
Let’s start mysqld:
[mysql3 ~]# systemctl start mysqld
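You can quickly verify that the required settings are active, for example:
[mysql3 ~]# mysql -u root -p -e "SELECT @@server_id, @@gtid_mode, @@enforce_gtid_consistency, @@log_bin, @@log_slave_updates"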
Now we need to find the latest GTIDs purged at backup time and set them. Then we will set up asynchronous replication and start it. We will then have live data from production replicated to this new slave.
Where to find this information depends on your backup solution.
Using MEB, the latest purged GTIDs are found in the file called backup_gtid_executed.sql :
[mysql3 ~]# cat /tmp/backup/meta/backup_gtid_executed.sql
# On a new slave, issue the following command if GTIDs are enabled:
SET @@GLOBAL.GTID_PURGED='33351000-3fe8-11e7-80b3-08002718d305:1-1002';
# Use the following command if you want to use the GTID handshake protocol:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;
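If you had taken the backup with mysqldump and --set-gtid-purged=ON (as in the earlier sketch), the equivalent statement is written near the top of the dump file and can be located with grep:
[mysql3 ~]# grep GTID_PURGED /tmp/dump.sql
SET @@GLOBAL.GTID_PURGED='33351000-3fe8-11e7-80b3-08002718d305:1-1002';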
Let’s connect to mysql on mysql3 and setup replication:
mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
MASTER_USER="repl_async", MASTER_PASSWORD='Xslave',
MASTER_AUTO_POSITION=1;
mysql> RESET MASTER;
mysql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";
mysql> START SLAVE;
The credentials used for replication must exist on mysql1.
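If the repl_async user does not exist yet, it could be created on mysql1 like this (a sketch; the '%' host pattern is for illustration only, restrict it as needed):
mysql1 mysql> CREATE USER 'repl_async'@'%' IDENTIFIED BY 'Xslave';
mysql1 mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_async'@'%';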
Using SHOW SLAVE STATUS, you should be able to see that the new slave is replicating.
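The fields to watch in that output are the two replication threads, which should both report Yes, and the replication delay:
mysql3 mysql> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...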
5. creation of the cluster with a single instance
It’s finally time to use MySQL Shell! 😉
You can run THE shell using mysqlsh:
[mysql3 ~]# mysqlsh
Now we can verify if our server is ready to become a member of a new cluster:
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306':
Validating instance...
The instance 'mysql3:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable or restart the server |
| master_info_repository | FILE | TABLE | Restart the server |
| relay_log_info_repository | FILE | TABLE | Restart the server |
| transaction_write_set_extraction | OFF | XXHASH64 | Restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+
Please fix these issues, restart the server and try again.
{
"config_errors": [
{
"action": "server_update",
"current": "CRC32",
"option": "binlog_checksum",
"required": "NONE"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
During this process, the configuration is parsed to check whether all required settings are present.
By default, some settings are missing or need to be changed; we can ask the shell to perform the changes for us:
mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306':
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]: y
Validating instance...
The configuration has been updated but it is required to restart the server.
{
"config_errors": [
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
This command only works to modify the configuration of the local instance (as the function’s name implies). So when you need to configure multiple members of a cluster, you have to connect to each node separately and run the command locally on it.
As the command output indicated, we now need to restart mysqld to enable the new configuration settings:
[mysql3 ~]# systemctl restart mysqld
We can now connect again with the shell, verify again the configuration and finally create the cluster:
mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password:
Your MySQL connection id is 6
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306':
Validating instance...
The instance 'mysql3:3306' is valid for Cluster usage
{
"status": "ok"
}
mysql-js> cluster = dba.createCluster('MyInnoDBCluster')
A new InnoDB cluster will be created on instance 'root@mysql3:3306'.
Creating InnoDB cluster 'MyInnoDBCluster' on 'root@mysql3:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
<Cluster:MyInnoDBCluster>
We can verify the status of our single-node cluster, once again using the shell:
mysql-js> cluster.status()
{
"clusterName": "MyInnoDBCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
}
}
We have a cluster that is running but which is of course not yet tolerant to any failure.
This is how our architecture looks now:
[diagram: architecture with the single-node cluster]
6. adding instances to the cluster
The goal is to have a Cluster of 3 nodes (or a Group of 3 Members). Now we will add mysql4, using the same backup we used for mysql3 and following the same procedure.
transfer:
[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp
restore:
[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql4 ~]# rm /var/lib/mysql/mysql*-bin* # just some cleanup
[mysql4 ~]# chown -R mysql. /var/lib/mysql
This time there is no need to modify the configuration manually; we will use the shell for that later. So we can simply start mysqld:
[mysql4 ~]# systemctl start mysqld
Let’s use the shell to join the Group:
mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password:
Your MySQL connection id is 27
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...
The instance 'mysql4:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable or restart the server |
| enforce_gtid_consistency | OFF | ON | Restart the server |
| gtid_mode | OFF | ON | Restart the server |
| log_bin | 0 | 1 | Restart the server |
| log_slave_updates | 0 | ON | Restart the server |
| master_info_repository | FILE | TABLE | Restart the server |
| relay_log_info_repository | FILE | TABLE | Restart the server |
| transaction_write_set_extraction | OFF | XXHASH64 | Restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+
Please fix these issues, restart the server and try again.
{
"config_errors": [
{
"action": "server_update",
"current": "CRC32",
"option": "binlog_checksum",
"required": "NONE"
},
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
We can let the shell configure it:
mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306':
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]: y
Validating instance...
The configuration has been updated but it is required to restart the server.
{
"config_errors": [
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
Restart the service to enable the changes:
[mysql4 ~]# systemctl restart mysqld
We will again use the same purged GTIDs as before (remember, they are in /tmp/backup/meta/backup_gtid_executed.sql):
mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";
We used the shell to illustrate how SQL mode can be used with it too.
We are ready to add mysql4 to the Group:
mysql-sql> \js
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...
The instance 'mysql4:3306' is valid for Cluster usage
{
"status": "ok"
}
Now we need to connect to a node that is already a member of the group to load the cluster object (get the cluster’s metadata):
mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password:
Your MySQL connection id is 29
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> cluster = dba.getCluster()
<Cluster:MyInnoDBCluster>
Now we can check if the node that we want to add is consistent with the transactions that have been applied (verify the GTIDs):
mysql-js> cluster.checkInstanceState('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Analyzing the instance replication state...
The instance 'mysql4:3306' is valid for the cluster.
The instance is fully recoverable.
{
"reason": "recoverable",
"state": "ok"
}
This is perfect, we can add the new member (mysql4):
mysql-js> cluster.addInstance("root@mysql4:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'root@mysql4:3306':
Adding instance to the cluster ...
The instance 'root@mysql4:3306' was successfully added to the cluster.
Let’s verify this:
mysql-js> cluster.status()
{
"clusterName": "MyInnoDBCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
}
}
Great!
Now let’s add mysql2, for which we don’t need any backup as the data is already present.
The first step on mysql2 is to stop the running slave threads (IO and SQL) and then completely forget about this asynchronous replication:
mysql2 mysql> stop slave;
mysql2 mysql> reset slave all;
It’s the right moment to add it to the cluster using MySQL Shell. First, we need to check the configuration:
[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql2:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> dba.configureLocalInstance()
[mysql2 ~]# systemctl restart mysqld
and now we can add mysql2 to the cluster:
[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql3:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.addInstance("root@mysql2:3306")
mysql-js> cluster.status()
{
"clusterName": "MyInnoDBCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"mysql2:3306": {
"address": "mysql2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
}
}
Note that it’s possible to run dba.configureLocalInstance() on a running node to persist the Group Replication settings to its my.cnf.
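For example, to persist them on mysql3, connect locally with the shell and run the same command as before; repeat this on each member so the settings survive a restart:
[mysql3 ~]# mysqlsh
mysql-js> dba.configureLocalInstance()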
7. configure the router
At this point the architecture looks like this:
[diagram: architecture at this point]
Let’s configure MySQL Router on mysql1. In fact, the router has the capability to bootstrap itself using the cluster’s metadata; it only needs access to one of the members:
[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root:
WARNING: The MySQL server does not have SSL configured and metadata used by the router may be transmitted unencrypted.
Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'MyInnoDBCluster'.
The following connection information can be used to connect to the cluster.
Classic MySQL protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
X protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
The configuration is done, and after starting it the router will listen on 4 ports:
[mysql1 ~]# systemctl start mysqlrouter
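You can quickly verify the routing: a connection to the R/W port should always reach the current primary, mysql3 in our case. For example:
[mysql1 ~]# mysql -h 127.0.0.1 -P 6446 -u root -p -e "SELECT @@hostname"
This should return mysql3 as long as mysql3 is the primary.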
8. test phase
Now you can check your cluster: see how fast it processes the replication, test some read queries, etc.
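For example, you can send a few test reads through the R/O port (6447); read-only connections are distributed between the secondaries, so repeated connections should alternate between mysql2 and mysql4 (a sketch, reusing the credentials from earlier):
[mysql1 ~]# for i in 1 2 3 4; do \
    mysql -h 127.0.0.1 -P 6447 -u root --password=X -N -e "SELECT @@hostname"; \
done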
As soon as you are happy with your tests, you just need to point the application to the router and it’s done! 😉
9. pointing the application to the new solution
Pointing the application to the router is the only downtime in our story, and it’s very short.
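How the application gets repointed depends on where it stores its connection settings. With a hypothetical configuration file, the change could be as small as this (illustrative only; your application’s settings will differ):
# hypothetical application config file
# before: host = mysql1, port = 3306 (direct to the master)
# after: connect through the router's R/W port
host = mysql1
port = 6446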
This is the final architecture:
[diagram: final architecture]
Enjoy MySQL InnoDB Cluster!