lefred blog: tribulations of a MySQL Evangelist

How to backup your InnoDB Cluster ?


MySQL InnoDB Cluster is more and more popular, and its adoption is even faster than I expected. Recently, during my travel to Stockholm, Sweden, a customer asked me about the best practice to back up a cluster.

Since my interlocutor was a customer, the obvious choice is to use MySQL Enterprise Backup (known as MEB). Of course, any other physical backup solution should also be fine.

The customer told me he was using cron to schedule his backups and was only using full backups… That’s perfect. So I told him that there is nothing complicated and that the cron job should look something like:

mysqlbackup --with-timestamp --backup-dir /backup  backup

Of course, I do not recommend using --user clusteradmin --password=xxxxx in the cron job; instead, configure your credentials using mysql_config_editor.
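For example, you could store the credentials once on the backup node and reference them from the cron job. This is just a sketch; the login-path name backup is an arbitrary example:

mysql_config_editor set --login-path=backup --user=clusteradmin --password
mysqlbackup --login-path=backup --with-timestamp --backup-dir /backup backup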

The customer answered that he didn’t want to back up every node each time and, since MySQL InnoDB Cluster keeps the data consistent across members, making a backup of one member should be enough (he wanted to spare some disk space).

He was completely right. So I advised him to back up only one member, and I would recommend using a Secondary master.

Then he told me that this is what he was already doing… but what would happen if the node where the backup should run is down ? Or has a problem ?

And once again, he was right !

So, to make the perfect backup of a MySQL InnoDB Cluster, our script should perform the following steps:

  • check if the node where the script runs is part of the cluster
  • check if the node is indeed a Secondary master
  • optionally, check if the node is lagging behind (large apply queue)
  • ensure that the backup is not already running on another member

Therefore, I wrote this small bash script that can be scheduled on every member of the MySQL InnoDB Cluster. The script benefits from the new Group Replication consistency levels to ensure the backup runs on one member only.
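To give an idea of what those checks look like, here is a minimal sketch. This is not the downloadable script: it assumes credentials are available via a login path or ~/.my.cnf, uses an arbitrary queue threshold of 1000, and leaves out the Group Replication consistency trick used to guarantee that only one member runs the backup:

#!/bin/bash
# only run the backup on an ONLINE SECONDARY member of the group
ROLE=$(mysql -BN -e "SELECT MEMBER_ROLE FROM performance_schema.replication_group_members
WHERE MEMBER_ID=@@server_uuid AND MEMBER_STATE='ONLINE'")
[ "$ROLE" != "SECONDARY" ] && exit 0

# skip the backup if this member is lagging too much behind (large apply queue)
QUEUE=$(mysql -BN -e "SELECT COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
FROM performance_schema.replication_group_member_stats
WHERE MEMBER_ID=@@server_uuid")
[ "${QUEUE:-0}" -gt 1000 ] && exit 0

mysqlbackup --login-path=backup --with-timestamp --backup-dir /backup backup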

Let’s see the script in action (instead of running the backup via cron, I start it at the same time on each node using the command line):

You can download the script here. Of course, this is only an example, so use it at your own risk:

Note that this solution only works with full backups. Incremental or differential backups might be corrupted when mixing different servers.

The following query on the backup_history table also gives you an overview of where the backups were taken:

mysql> SELECT backup_type, end_lsn, exit_state, member_host, member_role
FROM mysql.backup_history JOIN
performance_schema.replication_group_members ON member_id=server_uuid;

Output example:

 +-------------+----------+------------+-------------+-------------+
| backup_type | end_lsn | exit_state | member_host | member_role |
+-------------+----------+------------+-------------+-------------+
| FULL | 19571113 | SUCCESS | mysql1 | PRIMARY |
| FULL | 19756413 | SUCCESS | mysql3 | SECONDARY |
| FULL | 19757757 | SUCCESS | mysql3 | SECONDARY |
| FULL | 19861759 | SUCCESS | mysql3 | SECONDARY |
| FULL | 19821154 | SUCCESS | mysql1 | PRIMARY |
+-------------+----------+------------+-------------+-------------+
5 rows in set (0.00 sec)

I don’t consider myself an expert in MEB, but if you have any questions, you can use our popular forums, our Community Slack, or leave a comment here.

Update: there was a small bug in the first version of the script when a node was online but partitioned. This has been resolved and updated. Thanks Ted for the good catch !


Friday Feb 1st it is MySQL Day !


We are less than 48 hours away from the increasingly popular pre-FOSDEM MySQL Day !

Unfortunately one of our speakers won’t be able to deliver his talk. Indeed, Giuseppe had to cancel his talk on containers (Automating MySQL operations with containers) but he will be present during the day and during the Community Dinner, so if you have questions, I’m sure he will gladly answer them.

So we have replaced this great speaker with another great one: Shlomi Noach !

Shlomi will present a very new session: Un-split brain (aka Move Back in Time) MySQL.

Here is the updated agenda:

Friday 1st February

Start | End | Event | Speaker | Company | Topic
09:30 | 10:00 | Welcome | MySQL Community Team | |
10:00 | 10:30 | MySQL Shell – A DevOps-engineer day with MySQL’s development and administration tool | Miguel Araújo | Oracle | MySQL Shell
10:35 | 11:05 | MySQL Shell : the best DBA tool ? – How to use the MySQL Shell as a framework for DBAs | Frédéric Descamps | Oracle | MySQL Shell
11:05 | 11:25 | Coffee Break | | |
11:25 | 11:55 | Tuning MySQL 8.0 InnoDB for High Load | Dimitri Kravtchuk | Oracle | MySQL 8.0
12:00 | 12:30 | MySQL 8.0: advance tuning with Resource Group | Marco Tusa | Percona | MySQL 8.0
12:35 | 13:30 | Lunch Break | | |
13:30 | 14:00 | New index features in MySQL 8.0 | Erik Frøseth | Oracle | MySQL 8.0
14:05 | 14:35 | Optimizer Histograms: When they Help and When Do Not? | Sveta Smirnova | Percona | MySQL 8.0
14:40 | 15:10 | Regular expressions with full Unicode support – The ins and outs of the new regular expression functions and the ICU library | Martin Hansson | Oracle | MySQL 8.0
15:15 | 15:40 | Coffee Break | | |
15:40 | 16:10 | Mirroring MySQL traffic with ProxySQL: use cases | René Cannaò | ProxySQL | ProxySQL
16:15 | 16:45 | Un-split brain (aka Move Back in Time) MySQL | Shlomi Noach | Github | Binlogs
16:50 | 17:20 | 8 Group Replication Features That Will Make You Smile | Tiago Vale | Oracle | Replication
17:25 | 17:55 | Document Store & PHP | David Stokes | Oracle | Document Store

This event is SOLD OUT, therefore unregistered persons won’t be allowed to attend. Don’t forget your ticket.

MySQL Shell : the best DBA tool?


Last week I presented the following session at the pre-FOSDEM MySQL Day:

The audience seemed very interested in how the MySQL Shell can be extended.

During the presentation I showed how I extended the MySQL Shell with two new modules in Python:

Both projects are on github and are waiting for ideas, feature requests, pull requests, …

Here is the video of the Innotop module as shown during the presentation:

I hope you will enjoy the MySQL Shell even more and that you will start contributing to these modules.

pre-FOSDEM MySQL Day 2019 – slides


This event was just awesome. We got 110 participants ! Thank you everybody, and also a big thank you to the speakers.

Here are the slides of all the sessions:

I will add the 2 missing slide-decks as soon as I receive them.

As this day was a real success for the MySQL Community, we of course plan to organize it again next year, and maybe add a second conference room to increase the number of participants, as we had to turn people away.

Oracle Open World 2019 – CodeONE Call For Paper


The Oracle Open World 2019 Call For Paper is open until March 13th.

The MySQL track will be part of CodeONE, the parallel conference focused on developers.

We encourage you to submit a session related to the following topics:

  • case studies / user stories of your MySQL usage
  • lessons learned in running web scale MySQL
  • production DBA/devops perspectives into MySQL Architecture, Performance, Replication, InnoDB, Security, …
  • Migration to MySQL
  • MySQL 8.0 (Document Store, InnoDB Cluster, new Data Dictionary, …)

Don’t miss the chance to participate in this amazing event. Submit now() here !

MySQL InnoDB Cluster – howto install it from scratch


MySQL InnoDB Cluster is evolving very nicely. I realized that the MySQL Shell also improved a lot and that it has never been so easy to set up a cluster on 3 new nodes.

This is a video of the updated procedure on how to install MySQL InnoDB Cluster on rpm-based GNU/Linux distributions (Oracle Linux, RedHat, CentOS, Fedora, …):
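For reference, the MySQL Shell part of the procedure essentially boils down to a few AdminAPI calls like the following (a sketch only; the hostnames, the clusteradmin account and the cluster name are example values):

JS dba.configureInstance('clusteradmin@mysql1')   // repeat for mysql2 and mysql3
JS var cluster = dba.createCluster('myCluster')   // run while connected to mysql1
JS cluster.addInstance('clusteradmin@mysql2')
JS cluster.addInstance('clusteradmin@mysql3')
JS cluster.status()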

MySQL InnoDB Cluster – consistency levels


Consistency during reads has been a small concern for adopters of MySQL InnoDB Cluster (see this post and this one).

This is why MySQL now supports (since 8.0.14) a new consistency model to avoid such situations when needed.

Nuno Carvalho and Aníbal Pinto already posted a blog series I highly encourage you to read:

After those great articles, let’s check how that works with some examples.

The environment

This is how the environment is setup:

  • 3 members: mysql1, mysql2 & mysql3
  • the cluster runs in Single-Primary mode
  • mysql1 is the Primary Master
  • some extra sys views are installed

Example 1 – EVENTUAL

This is the default behavior (group_replication_consistency='EVENTUAL'). The scenario is the following:

  • we display the default value of the session variable controlling the Group Replication Consistency on the Primary and on one Secondary
  • we lock a table on a Secondary master (mysql3) to block the applier from applying the transactions coming from the Primary
  • we demonstrate that even if we commit a new transaction on mysql1, we can read the table on mysql3 and the new record is missing (it could not be applied due to the lock)
  • once unlocked, the transaction is applied and the record is visible on the Secondary master (mysql3) too (see the sketch below).
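As an illustration of that scenario, the session flow looks roughly like this (a sketch; the table name test.t1 is just an example, and the mysql1>/mysql3> prompts indicate on which member each statement runs):

-- on mysql3 (Secondary): hold a read lock so the applier cannot apply writes
mysql3> LOCK TABLES test.t1 READ;
-- on mysql1 (Primary): the commit returns immediately with EVENTUAL
mysql1> INSERT INTO test.t1 VALUES (NULL, 'new row');
-- on mysql3: the read also returns immediately, but the new row is not visible yet
mysql3> SELECT * FROM test.t1;
-- on mysql3: release the lock, the queued transaction is applied and the row appears
mysql3> UNLOCK TABLES;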

Example 2 – BEFORE

In this example, we will illustrate how we can avoid inconsistent reads on a Secondary master:

As you can notice, once we have set the session variable controlling the consistency, read operations on the table (the server is READ-ONLY) wait for the apply queue to be empty before returning the result set.

We could also notice that the wait time (timeout) for this read operation is very long (8 hours by default) and can be modified to a shorter period:

We used SET wait_timeout=10 to set it to 10 seconds.

When the timeout is reached, the following error is returned:

ERROR: 3797: Error while waiting for group transactions commit on group_replication_consistency= 'BEFORE'
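Summarized as a short session on the Secondary (using the same hypothetical test.t1 table as above):

mysql3> SET SESSION group_replication_consistency='BEFORE';
mysql3> SET SESSION wait_timeout=10;
-- this read now waits until the local apply queue is empty;
-- if the queue is still not empty after 10 seconds, error 3797 is returned
mysql3> SELECT * FROM test.t1;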

Example 3 – AFTER

It’s also possible to return from commit on the writer only when all members applied the change too. Let’s check this in action too:

This can be considered as synchronous writes, as the return from commit happens only when all members have applied the change. However, you can also notice that with this consistency level, wait_timeout has no effect on the write. In fact, wait_timeout only has an effect on read operations when the consistency level is different from EVENTUAL.

This means that it can lead to several issues if you lock a table for any reason. If the DBA needs to perform some maintenance operations and needs to lock a table for a long time, it’s mandatory not to run queries in AFTER or BEFORE_AND_AFTER during such maintenance.
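The write side can be sketched like this (again with the hypothetical test.t1 table):

-- on mysql1 (Primary): commits now wait until all members have applied the change
mysql1> SET SESSION group_replication_consistency='AFTER';
-- if a Secondary cannot apply (for example a table is locked there),
-- this statement will hang until the change is applied everywhere
mysql1> INSERT INTO test.t1 VALUES (NULL, 'wait for everyone');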

Example 4 – Scope

In the following video, I just want to show you the “scope” of these “waits” for transactions that are in the applying queue.

We will lock t1 again, but on a Secondary master we will perform a SELECT from table t2. The first time we will keep the default value of group_replication_consistency (EVENTUAL) and the second time we will change the consistency level to BEFORE:

We can see that as soon as there are transactions in the apply queue, if you change the consistency level to something including BEFORE, the read needs to wait for the previous transactions in the queue to be applied, whether or not those events are related to the same table(s) or record(s). It doesn’t matter.

Example 5 – Observability

Of course it’s possible to check what’s going on and if queries are waiting for something.

BEFORE

When group_replication_consistency is set to BEFORE (or includes it), while a transaction is waiting for the apply queue to be emptied, it’s possible to track the waiting transactions by running the following query:

SELECT * FROM information_schema.processlist 
WHERE state='Executing hook on transaction begin.';

AFTER

When group_replication_consistency is set to AFTER (or includes it), while a transaction is waiting for the transaction to be committed on the other members too, it’s possible to track those waiting transactions by running the following query:

SELECT * FROM information_schema.processlist 
WHERE state='waiting for handler commit';

It’s also possible to get even more information by joining the processlist and the InnoDB trx tables:

SELECT *, TIME_TO_SEC(TIMEDIFF(now(),trx_started)) lock_time_sec 
FROM information_schema.innodb_trx JOIN information_schema.processlist
ON processlist.ID=innodb_trx.trx_mysql_thread_id
WHERE state='waiting for handler commit' ORDER BY trx_started\G

Conclusion

This consistency level is a wonderful feature but it could become dangerous if abused without full control of your environment.

I would avoid setting anything including AFTER globally if you don’t completely control your environment. Table locks, DDLs, logical backups and snapshots could all delay the commits, and transactions could start piling up on the Primary Master. But if you control your environment, you now have complete freedom to control the consistency you need on your MySQL InnoDB Cluster.

Migrate from MariaDB to MySQL on CentOS


In this article, I will show you how to migrate your WordPress database from MariaDB on CentOS to the real MySQL.

Why migrate to MySQL 8.0 ?

MySQL 8.0 brings a lot of new features. These features make MySQL database much more secure (like new authentication, secure password policies and management, …) and fault tolerant (new data dictionary), more powerful (new redo log design, less contention, extreme scale out of InnoDB, …), better operation management (SQL Roles, instant add columns), many (but really many!) replication enhancements and native group replication… and finally many cool stuff like the new Document Store, the new MySQL Shell and MySQL InnoDB Cluster that you should already know if you follow this blog (see these TOP 10 for features for developers and this TOP 10 for DBAs & OPS).

Starting Situation

So first before we do our upgrade, let’s verify what we have:

> select version();
+----------------+
| version() |
+----------------+
| 5.5.60-MariaDB |
+----------------+
1 row in set (0.01 sec)

Let’s also verify the tables we have:

> show tables;
+-----------------------+
| Tables_in_wp |
+-----------------------+
| wp_commentmeta |
| wp_comments |
| wp_links |
| wp_options |
| wp_postmeta |
| wp_posts |
| wp_term_relationships |
| wp_term_taxonomy |
| wp_termmeta |
| wp_terms |
| wp_usermeta |
| wp_users |
+-----------------------+
12 rows in set (0.00 sec)

And in /var/lib/mysql we can see:

[root@mysql4 mysql]# ls -l
total 28704
-rw-rw----. 1 mysql mysql 16384 Mar 25 20:08 aria_log.00000001
-rw-rw----. 1 mysql mysql 52 Mar 25 20:08 aria_log_control
-rw-rw----. 1 mysql mysql 18874368 Mar 25 20:12 ibdata1
-rw-rw----. 1 mysql mysql 5242880 Mar 25 20:12 ib_logfile0
-rw-rw----. 1 mysql mysql 5242880 Mar 25 20:08 ib_logfile1
drwx------. 2 mysql mysql 4096 Mar 25 20:08 mysql
srwxrwxrwx. 1 mysql mysql 0 Mar 25 20:08 mysql.sock
drwx------. 2 mysql mysql 4096 Mar 25 20:08 performance_schema
drwx------. 2 mysql mysql 6 Mar 25 20:08 test
drwx------. 2 mysql mysql 4096 Mar 25 20:11 wp
[root@mysql4 mysql]# ls -l wp
total 164
-rw-rw----. 1 mysql mysql 65 Mar 25 20:08 db.opt
-rw-rw----. 1 mysql mysql 8688 Mar 25 20:11 wp_commentmeta.frm
-rw-rw----. 1 mysql mysql 13380 Mar 25 20:11 wp_comments.frm
-rw-rw----. 1 mysql mysql 13176 Mar 25 20:11 wp_links.frm
-rw-rw----. 1 mysql mysql 8698 Mar 25 20:11 wp_options.frm
-rw-rw----. 1 mysql mysql 8682 Mar 25 20:11 wp_postmeta.frm
-rw-rw----. 1 mysql mysql 13684 Mar 25 20:11 wp_posts.frm
-rw-rw----. 1 mysql mysql 8682 Mar 25 20:11 wp_termmeta.frm
-rw-rw----. 1 mysql mysql 8666 Mar 25 20:11 wp_term_relationships.frm
-rw-rw----. 1 mysql mysql 8668 Mar 25 20:11 wp_terms.frm
-rw-rw----. 1 mysql mysql 8768 Mar 25 20:11 wp_term_taxonomy.frm
-rw-rw----. 1 mysql mysql 8684 Mar 25 20:11 wp_usermeta.frm
-rw-rw----. 1 mysql mysql 13064 Mar 25 20:11 wp_users.frm

We can see that we have .frm files and one InnoDB table space: ibdata1.

> show global variables like 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_file_per_table | OFF |
+-----------------------+-------+

We can also see some aria files.

Now it’s time to upgrade to the official and original MySQL. We would like to use MySQL 8.0 of course, but the default version of MariaDB in CentOS is very old and MySQL 8.0 requires a newer version of the InnoDB files (undo logs, …). Therefore we will first move to the latest MySQL 5.7 and then to MySQL 8.0.

MySQL 5.7

The first step is to install the yum repository for MySQL Community:

# yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-2.noarch.rpm

Now we can properly stop the server and then upgrade the packages.

When upgrading a MySQL system, I always recommend, before stopping mysqld, to set innodb_fast_shutdown to 0. This forces the dirty pages in the InnoDB Buffer Pool to be written to disk and bypasses InnoDB recovery from the undo logs at mysqld’s next start.

> set global innodb_fast_shutdown=0;
Query OK, 0 rows affected (0.01 sec)

# systemctl stop mariadb

And now we can install MySQL 5.7 (we enable 5.7 repo and disable 8.0 repo):

# yum install -y mysql-community-server mysql-community-client \
--enablerepo=mysql57-community --disablerepo=mysql80-community

We can see that MySQL 5.7 is now replacing the old MariaDB:

...
Installed:
mysql-community-client.x86_64 0:5.7.25-1.el7
mysql-community-libs.x86_64 0:5.7.25-1.el7
mysql-community-libs-compat.x86_64 0:5.7.25-1.el7
mysql-community-server.x86_64 0:5.7.25-1.el7
Dependency Installed:
mysql-community-common.x86_64 0:5.7.25-1.el7
Replaced:
mariadb.x86_64 1:5.5.60-1.el7_5 mariadb-libs.x86_64 1:5.5.60-1.el7_5
mariadb-server.x86_64 1:5.5.60-1.el7_5
Complete!

We can start mysqld and run the mysql_upgrade process:

# systemctl start mysqld
# mysql_upgrade

You should get some non-fatal errors related to corrupted tables that mysql_upgrade fixes.

After any mysql_upgrade it’s always advised to restart MySQL.

You can visit your WordPress site, it’s perfectly working:

MySQL 8.0

upgrade checker utility

The following step is not mandatory but highly recommended. Before upgrading to MySQL 8.0, you should install the new MySQL Shell and see the new upgrade checker tool in action ! (see also this previous article)

# yum install -y mysql-shell

Please note that you should always use the latest MySQL Shell independently of your MySQL version. In this case I’m using MySQL 5.7.25 and MySQL Shell 8.0.15.

# mysqlsh root@localhost
MySQL JS > util.checkForServerUpgrade()
...

1) Usage of old temporal type
No issues found
2) Usage of db objects with names conflicting with reserved keywords in 8.0
No issues found
3) Usage of utf8mb3 charset
No issues found
....
Errors: 0
Warnings: 1
Notices: 0
No fatal errors were found that would prevent an upgrade, but some potential issues
were detected. Please ensure that the reported issues are not significant before upgrading.

We don’t have any incompatibilities with MySQL 8.0, so we can install it without worry.

MySQL 8.0 installation

Now, if you want to keep upgrading to MySQL 8.0, just stop mysqld again as previously and install the MySQL 8.0 binaries:

# mysql -e 'set global innodb_fast_shutdown=0';
# systemctl stop mysqld

Let’s run the new binaries installation:

# yum upgrade -y mysql-community-server mysql-community-client
...
Updated:
mysql-community-client.x86_64 0:8.0.15-1.el7
mysql-community-server.x86_64 0:8.0.15-1.el7
Dependency Updated:
mysql-community-common.x86_64 0:8.0.15-1.el7
mysql-community-libs.x86_64 0:8.0.15-1.el7

Now we can start MySQL again and run mysql_upgrade (this won’t be required anymore from 8.0.16):

# systemctl start mysqld
# mysql_upgrade

We can finally restart mysqld for the last time and enjoy again our WordPress site using MySQL 8.0!

Final check

We can also now verify MySQL’s datadir and see that the .frm files are gone, as MySQL 8.0 uses the new Data Dictionary:

# ls -lh wp/
total 656K
-rw-r-----. 1 mysql mysql 192K Mar 25 21:13 wp_comments.ibd
-rw-r-----. 1 mysql mysql 128K Mar 25 21:13 wp_links.ibd
-rw-r-----. 1 mysql mysql 176K Mar 25 21:15 wp_posts.ibd
-rw-r-----. 1 mysql mysql 160K Mar 25 21:13 wp_users.ibd

We can notice that not all tables have their own tablespace. This can be verified with the following query:

mysql> select NAME, ROW_FORMAT, SPACE_TYPE from information_schema.INNODB_TABLES where name like 'wp/%';
+--------------------------+------------+------------+
| NAME | ROW_FORMAT | SPACE_TYPE |
+--------------------------+------------+------------+
| wp/wp_commentmeta | Compact | System |
| wp/wp_comments | Dynamic | Single |
| wp/wp_links | Dynamic | Single |
| wp/wp_options | Compact | System |
| wp/wp_postmeta | Compact | System |
| wp/wp_posts | Dynamic | Single |
| wp/wp_term_relationships | Compact | System |
| wp/wp_term_taxonomy | Compact | System |
| wp/wp_termmeta | Compact | System |
| wp/wp_terms | Compact | System |
| wp/wp_usermeta | Compact | System |
| wp/wp_users | Dynamic | Single |
+--------------------------+------------+------------+
12 rows in set (0.24 sec)

Summary

To upgrade from MariaDB to MySQL you need to perform the following simple steps:

  • stop MariaDB’s mysqld process
  • install the MySQL 5.7 binaries
  • start mysqld & run mysql_upgrade
  • run MySQL Shell’s upgrade checker utility
  • stop mysqld
  • upgrade the binaries to MySQL 8.0
  • start mysqld & run mysql_upgrade, then restart mysqld if < 8.0.16
  • just start mysqld if >= 8.0.16


Replace MariaDB 10.3 by MySQL 8.0


Why migrate to MySQL 8.0 ?

MySQL 8.0 brings a lot of new features. These features make MySQL database much more secure (like new authentication, secure password policies and management, …) and fault tolerant (new data dictionary), more powerful (new redo log design, less contention, extreme scale out of InnoDB, …), better operation management (SQL Roles, instant add columns), many (but really many!) replication enhancements and native group replication… and finally many cool stuff like the new Document Store, the new MySQL Shell and MySQL InnoDB Cluster that you should already know if you follow this blog (see these TOP 10 for features for developers and this TOP 10 for DBAs & OPS).

Not a drop-in replacement anymore !

We saw in this previous post how to migrate from MariaDB 5.5 (the default on CentOS/RedHat 7) to MySQL. This was a straightforward migration as, at the time, MariaDB was a drop-in replacement for MySQL… but this is not the case anymore since MariaDB 10.x !

Let’s get started with the migration to MySQL 8.0 !

Options

Two possibilities are available to us:

  1. Use a logical dump for the schemas and the data
  2. Use a logical dump for the schemas and transportable InnoDB tablespaces for the data

Preparing the migration

Option 1 – full logical dump

It’s recommended to avoid having to deal with the mysql.* tables as they won’t be compatible; I recommend you save all that information and import the required entries (like users) manually. It’s maybe the best time to do some cleanup.

As we are still using our WordPress site to illustrate this migration, I will dump the wp database:

# mysqldump -B wp > wp.sql

MariaDB doesn’t provide mysqlpump, so I used the good old mysqldump. There was a nice article this morning about MySQL logical dump solutions, see it here.

Option 2 – table design dump & transportable InnoDB Tables

First we take a dump of our database without the data (-d):

# mysqldump -d -B wp > wp_nodata.sql

Then we export the first table space:

[wp]> flush tables wp_comments for export;
Query OK, 0 rows affected (0.008 sec)

We copy it to the desired location (the .ibd and the .cfg):

# cp wp/wp_comments.ibd ~/wp_innodb/
# cp wp/wp_comments.cfg ~/wp_innodb/

And finally we unlock the table:

[wp]> unlock tables;

These operations need to be repeated for all the tables ! If you have a large number of tables, I encourage you to script all these operations, as in the sketch below.
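A minimal sketch of such a script (not the actual script used for this post). It assumes the datadir is /var/lib/mysql, the schema is wp, the target directory ~/wp_innodb already exists, credentials are available to the mysql client, and that the client’s \! (system) command is usable in batch mode so the copy runs while the session still holds the export lock:

for t in $(mysql -BN -e "SHOW TABLES" wp); do
  mysql wp <<EOF
FLUSH TABLES $t FOR EXPORT;
\! cp /var/lib/mysql/wp/$t.ibd /var/lib/mysql/wp/$t.cfg ~/wp_innodb/
UNLOCK TABLES;
EOF
done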

Replace the binaries / install MySQL 8.0

Unlike with previous versions, if we install MySQL from the Community Repo as seen in this post, MySQL 8.0 won’t automatically be seen as a replacement for the MariaDB 10.x packages. To avoid any conflict and installation failure, we will replace the MariaDB packages with the MySQL ones using the swap command of yum:

# yum swap -- install mysql-community-server mysql-community-libs-compat -- \ 
remove MariaDB-server MariaDB-client MariaDB-common MariaDB-compat

This new yum command is very useful and allows packages that depend on the MySQL libraries, like php-mysql or postfix for example, to stay installed without breaking their dependencies.

The result of the command will be something similar to:

Removed:
MariaDB-client.x86_64 0:10.3.13-1.el7.centos
MariaDB-common.x86_64 0:10.3.13-1.el7.centos
MariaDB-compat.x86_64 0:10.3.13-1.el7.centos
MariaDB-server.x86_64 0:10.3.13-1.el7.centos
Installed:
mysql-community-libs-compat.x86_64 0:8.0.15-1.el7
mysql-community-server.x86_64 0:8.0.15-1.el7
Dependency Installed:
mysql-community-client.x86_64 0:8.0.15-1.el7
mysql-community-common.x86_64 0:8.0.15-1.el7
mysql-community-libs.x86_64 0:8.0.15-1.el7

Now the best option is to empty the datadir and start mysqld:

# rm -rf /var/lib/mysql/*
# systemctl start mysqld

This will start the initialize process and start MySQL.

As you may know, by default MySQL is now more secure and a temporary password has been generated for the root user. You can find it in the error log (/var/log/mysqld.log):

2019-03-26T12:32:14.475236Z 5 [Note] [MY-010454] [Server] 
A temporary password is generated for root@localhost: S/vfafkpD9a

At first login with the root user, the password must be changed:

# mysql -u root -p
mysql> set password='Complicate1#';

Adding the credentials

Now we need to create our database (wp), our user and its credentials.

Please note that the PHP version used by default in CentOS might not yet be compatible with the new default secure authentication plugin, therefore we will have to create our user with the older authentication plugin, mysql_native_password. For more info see these posts:

Migrating to MySQL 8.0 without breaking old application

Drupal and MySQL 8.0.11 – are we there yet ?

Joomla! and MySQL 8.0.12

PHP 7.2.8 & MySQL 8.0

mysql> create user 'wp'@'127.0.0.1' identified with 
'mysql_native_password' by 'fred';

By default, this password (fred) won’t be allowed with the default password policy.

To not have to change our application, it’s possible to override the policy like this:

mysql> set global validate_password.policy=LOW;
mysql> set global validate_password.length=4;


It’s possible to see the user and its authentication plugin easily using the following query:

mysql> select Host, User, plugin,authentication_string from mysql.user where User='wp';
+-----------+------+-----------------------+-------------------------------------------+
| Host | User | plugin | authentication_string |
+-----------+------+-----------------------+-------------------------------------------+
| 127.0.0.1 | wp | mysql_native_password | *6C69D17939B2C1D04E17A96F9B29B284832979B7 |
+-----------+------+-----------------------+-------------------------------------------+

We can now create the database and grant the privileges to our user:

mysql> create database wp;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on wp.* to 'wp'@'127.0.0.1';
Query OK, 0 rows affected (0.01 sec)

Restore the data

This process also depends on the option chosen earlier.

Option 1

This option is the most straightforward: one restore and our site is back online:

# mysql -u wp -pfred wp <~/wp.sql

Option 2

This operation is more complicated as it requires more steps.

First we have to restore the whole schema with no data:

# mysql -u wp -pfred wp <~/wp_nodata.sql

And now, for every table, we need to perform the following operations:

mysql> alter table wp_posts discard tablespace;

# cp ~/wp_innodb/wp_posts.ibd /var/lib/mysql/wp/
# cp ~/wp_innodb/wp_posts.cfg /var/lib/mysql/wp/
# chown mysql. /var/lib/mysql/wp/wp_posts.*

mysql> alter table wp_posts import tablespace;

Yes, this is required for all tables. This is why I encourage you to script it if you choose this option, for example like the sketch below.
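A possible sketch for the import side, mirroring the export loop shown earlier (same assumptions about paths; run the copy and chown as root):

for t in $(mysql -u wp -pfred -BN -e "SHOW TABLES" wp); do
  mysql -u wp -pfred wp -e "ALTER TABLE $t DISCARD TABLESPACE"
  cp ~/wp_innodb/$t.ibd ~/wp_innodb/$t.cfg /var/lib/mysql/wp/
  chown mysql. /var/lib/mysql/wp/$t.ibd /var/lib/mysql/wp/$t.cfg
  mysql -u wp -pfred wp -e "ALTER TABLE $t IMPORT TABLESPACE"
done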

Conclusion

So as you can see, it’s still possible to migrate from MariaDB to MySQL, but since 10.x this is not a drop-in replacement anymore and it requires several steps, including a logical backup.

Ripple Binlog Server for MySQL


Today I started to check out Ripple, the new MySQL binlog server. I don’t want to give any feedback yet, nor do I want to answer THE question: is this the Binlog Server we were all waiting for ?

But I had some difficulties building it on my test machines (rpm based, as mostly everybody knows). I think this might be a limitation for people wanting to evaluate it. Therefore, with the help of Daniël van Eeden, I made an rpm that will facilitate the installation on your system.

The rpm is made for EL7-compatible Linux distributions and includes the libssl and libcrypto libraries in version 1.1.

Enjoy MySQL Replication !

MySQL InnoDB Cluster – how to manage a split-brain situation


Everywhere I go to present MySQL InnoDB Cluster, during the demo of creating a cluster, many people don’t understand why, when I have 2 members, my cluster is not yet tolerant to any failure.

Indeed when you create a MySQL InnoDB Cluster, as soon as you have added your second instance, you can see in the status:

    "status": "OK_NO_TOLERANCE",      
"statusText": "Cluster is NOT tolerant to any failures.",

Quorum

Why is that ? It’s because, to be part of the primary partition (the partition that holds the service, the one having a Primary-Master in Single-Primary Mode, the default mode), your partition must contain the majority of the nodes (quorum). In MySQL InnoDB Cluster (and many other cluster solutions), to achieve quorum, the number of members in a partition must be bigger than 50% of the total.

So when we have 2 nodes, if there is a network issue between the two servers, the cluster will be split into 2 partitions. And each of them will have 50% of the total members (1 of 2). Is 50% bigger than 50% ?? No! That’s why neither partition will reach quorum and, in the case of MySQL InnoDB Cluster, neither will allow queries.

Indeed, the first machine will see that it cannot reach the second machine anymore… but why ? Did the second machine die ? Do I have network interface issues ? We don’t know, so we cannot decide.

Let’s take a look at this cluster of 3 members (3/3 = 100%):


If we take a look at the cluster.status() output, we can see that with 3 nodes we can tolerate one failure:

    "status": "OK",      
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",

Now let’s imagine we have a network issue that will isolate one of the members:

We can see in the cluster.status() output that the node is missing:

Our cluster will still be able to serve transactions as one partition still has quorum (2/3 = 66%, which is bigger than 50%).

        "mysql6:3306": {
"address": "mysql6:3306",
"mode": "n/a",
"readReplicas": {},
"role": "HA",
"status": "(MISSING)"
}

There is a very important concept I want to cover as it is not always obvious. The cluster is seen differently by InnoDB Cluster and by Group Replication. Indeed, InnoDB Cluster relies on metadata created by the DBA using the MySQL Shell. That metadata describes how the cluster has been set up. Group Replication sees the cluster differently: it sees it as it was the last time it checked and how it is right now… and updates that view. This is commonly called the view of the world.

So in the example above, InnoDB Cluster sees 3 nodes: 2 online and 1 missing. For Group Replication, for a short moment the partitioned node was UNREACHABLE, and a few seconds later, after being ejected from the Group by the majority (so only if there is still a majority), the node is not part of the cluster anymore. The Group size is now 2 of 2 (2/2, not 2/3). This information is exposed via performance_schema.replication_group_members.
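For example, on any remaining member you can check that current view of the world with:

mysql> SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;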

If our network issue had been more serious and had split our cluster in 3 like the picture below, the cluster would be “offline” as none of the 3 partitions would have reached quorum (majority), 1/3 = 33% (< 50%):

In this case the MySQL service won’t work properly until a human fixes the situation.

Fixing the situation

When there is no more primary partition in the cluster (like in the example above), the DBA needs to restore the service. And as usual, there is already some information in the MySQL error log:

2019-04-10T13:34:09.051391Z 0 [Warning] [MY-011493] [Repl] Plugin group_replication 
reported: 'Member with address mysql4:3306 has become unreachable.'
2019-04-10T13:34:09.065598Z 0 [Warning] [MY-011493] [Repl] Plugin group_replication
reported: 'Member with address mysql5:3306 has become unreachable.'
2019-04-10T13:34:09.065615Z 0 [ERROR] [MY-011495] [Repl] Plugin group_replication
reported: 'This server is not able to reach a majority of members in
the group. This server will now block all updates. The server will
remain blocked until contact with the majority is restored. It is
possible to use group_replication_force_members to force a new group
membership.

From the message, we can see that this is exactly the situation we are explaining here. We can see in cluster.status() that the cluster is “blocked”:

    "status": "NO_QUORUM",      
"statusText": "Cluster has no quorum as visible from 'mysql4:3306'
and cannot process write transactions.
2 members are not active",

We have two solutions to fix the problem:

  1. using SQL and Group Replication variables
  2. using the MySQL Shell’s adminAPI

Fixing using SQL and Group Replication variables

This process is explained in the manual (Group Replication: Network Partitioning).

On the node the DBA wants to use to restore the service, if there is only one node left, we can use the global variable group_replication_force_members and set it to the GCS address of the server, which you can find in group_replication_local_address (if there are multiple servers online but not reaching the majority, all of them should be added to this variable):

SQL set global group_replication_force_members=@@group_replication_local_address;

Be careful that the best practice is to shutdown the other nodes to avoid any kind of conflicts if they reappear during the process of forcing quorum.

And the cluster will be available again. We can see in the error log that the situation has been resolved:

2019-04-10T14:41:15.232078Z 0 [Warning] [MY-011498] [Repl] Plugin group_replication 
reported:
'The member has resumed contact with a majority of the members in the group.
Regular operation is restored and transactions are unblocked.'

Don’t forget to remove the value of group_replication_force_members when you are back online:

SQL set global group_replication_force_members='';

When the network issues are resolved, the nodes will try to reconnect, but as we forced the membership, those nodes will be rejected. You will need to rejoin the Group by:

  • restarting mysqld
  • or restarting again group replication (stop group_replication; start group_replication)
  • or using MySQL Shell (cluster.rejoinInstance())

Using the MySQL Shell’s adminAPI

The other option is to use the adminAPI from the MySQL Shell. This is the preferable option of course ! With the AdminAPI you don’t even need to know the port used for GCS to restore the quorum.

In the example below, we will use the server called mysql4 to re-activate our cluster:

JS cluster.forceQuorumUsingPartitionOf('clusteradmin@mysql4') 

And when the network issues are resolved, the Shell can also be used to rejoin other instances (in this case mysql6) :

JS cluster.rejoinInstance('clusteradmin@mysql6')

Conclusion

When for any reason you have lost quorum on your MySQL InnoDB Cluster, don’t panic. Choose the node (or the list of nodes that can still communicate with each other) you want to use and, if possible, shut down or stop mysqld on the other ones. Then MySQL Shell is again your friend: use the adminAPI to force the quorum and reactivate your cluster in one single command !

Bonus

If you want to know if your MySQL server is part of the primary partition (the one having the majority), you can run this command:

mysql> SELECT IF( MEMBER_STATE='ONLINE' AND ((
SELECT COUNT(*) FROM performance_schema.replication_group_members
WHERE MEMBER_STATE NOT IN ('ONLINE', 'RECOVERING')) >=
((SELECT COUNT(*)
FROM performance_schema.replication_group_members)/2) = 0), 'YES', 'NO' )
`in primary partition`
FROM performance_schema.replication_group_members
JOIN performance_schema.replication_group_member_stats
USING(member_id) where member_id=@@global.server_uuid;
+----------------------+
| in primary partition |
+----------------------+
| NO |
+----------------------+

Or by using this addition to sys schema: addition_to_sys_GR.sql

SQL select gr_member_in_primary_partition();
+----------------------------------+
| gr_member_in_primary_partition() |
+----------------------------------+
| YES |
+----------------------------------+
1 row in set (0.0288 sec)

MySQL InnoDB Cluster : avoid split-brain while forcing quorum


We saw yesterday that when an issue occurs (like a network split), it’s possible to end up with a partitioned cluster where none of the partitions has quorum (a majority of members). For more info, read how to manage a split-brain situation.

If you read the previous article, you noticed the red warning about forcing the quorum. As an advice is never too much, let me write it down again here: “Be careful that the best practice is to shutdown the other nodes to avoid any kind of conflicts if they reappear during the process of forcing quorum“.

But if some network problem is happening, it might not be possible to shut down those other nodes. Would that really be bad ?

YES !

Split-Brain

Remember, we were in this situation:

We decided to force the quorum on one of the nodes (maybe the only one we could connect to):

But what could happen if, while we do this or just after, the network problem got resolved ?

In fact we will have that split-brain situation we would like to avoid as much as possible.

Details

So what happens ? And why ?

When we ran cluster.forceQuorumUsingPartitionOf('clusteradmin@mysql1'), this is what we could read in the MySQL error log of that server:

[Warning] [MY-011498] [Repl] Plugin group_replication reported: 
'The member has resumed contact with a majority of the members in the group.
Regular operation is restored and transactions are unblocked.'
[Warning] [MY-011499] [Repl] Plugin group_replication reported:
'Members removed from the group: mysql2:3306, mysql3:3306'

The node ejected the other nodes from the cluster, and of course no decision was communicated to these servers as they were not reachable anyway.

Now when the network situation was solved, this is what we could read on mysql2:

[Warning] [MY-011494] [Repl] Plugin group_replication reported: 
'Member with address mysql3:3306 is reachable again.'
[Warning] [MY-011498] [Repl] Plugin group_replication reported: 'The
member has resumed contact with a majority of the members in the group.
Regular operation is restored and transactions are unblocked.'
[Warning] [MY-011499] [Repl] Plugin group_replication reported:
'Members removed from the group: mysql1:3306

Same on mysql3, which means these two nodes reached a majority together and ejected mysql1 from “their” cluster.

On mysql1, we can see in performance_schema:

mysql> select * from performance_schema.replication_group_members\G
************************** 1. row **************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: fb819b30-5b90-11e9-bf8a-08002718d305
MEMBER_HOST: mysql4
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.16
1 row in set (0.0013 sec)

An on mysql2 and mysql3:

mysql> select * from performance_schema.replication_group_members\G
************************** 1. row **************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 4ff0a33f-5c49-11e9-abc9-08002718d305
MEMBER_HOST: mysql6
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.16
************************** 2. row **************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: f8ac8d14-5b90-11e9-a22a-08002718d305
MEMBER_HOST: mysql5
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.16

This is of course the worst situation that could happen when dealing with a cluster.

Solution

The solution is to prevent the nodes that are not part of the forced-quorum partition from agreeing to form their own group when they reach a majority.

This can be achieved by setting these variables on a majority of nodes (on two servers if your InnoDB Cluster is made of 3 nodes, for example):

When I had fixed my cluster and all nodes were online again, I changed these settings on mysql1 and mysql2:

set global group_replication_unreachable_majority_timeout=30;
set global group_replication_exit_state_action = 'ABORT_SERVER';

This means that if there is a problem and the node is not able to join the majority after 30 seconds, it will go into ERROR state and then shut down `mysqld`.

Pay attention that the 30 seconds is only an example. The time should be long enough to allow me to remove that timer on the node I want to use for forcing the quorum (mysql1 in the example), while also being sure that this time has elapsed on the nodes I can’t access, so that they have removed themselves from the group (mysql2 in the example).

So, if we try again with our example, once the network problem happens, after 30 seconds we can see in mysql2‘s error log that it works as expected:

[ERROR] [MY-011711] [Repl] Plugin group_replication reported: 'This member could not reach 
a majority of the members for more than 30 seconds. The member will now leave
the group as instructed by the group_replication_unreachable_majority_timeout
option.'
[ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically
set into read only mode after an error was detected.'
[Warning] [MY-013373] [Repl] Plugin group_replication reported: 'Started
auto-rejoin procedure attempt 1 of 1'
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] Timeout while waiting for the group communication engine to exit!'
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] The member has failed to gracefully leave the group.'
[System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier'
executed'. Previous state master_host='', master_port= 0,
master_log_file='', master_log_pos= 798,
master_bind=''. New state master_host='', master_port= 0,
master_log_file='', master_log_pos= 4, master_bind=''.
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] Error connecting to the local group communication engine instance.'
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] The member was unable to join the group. Local port: 33061'
[Warning] [MY-013374] [Repl] Plugin group_replication reported:
'Timeout while waiting for a view change event during the auto-rejoin procedure'
[Warning] [MY-013375] [Repl] Plugin group_replication reported:
'Auto-rejoin procedure attempt 1 of 1 finished.
Member was not able to join the group.'
[ERROR] [MY-013173] [Repl] Plugin group_replication reported:
'The plugin encountered a critical error and will abort:
Could not rejoin the member to the group after 1 attempts'
[System] [MY-013172] [Server] Received SHUTDOWN from user .
Shutting down mysqld (Version: 8.0.16).
[Warning] [MY-010909] [Server] /usr/sbin/mysqld:
Forcing close of thread 10 user: 'clusteradmin'.
[Warning] [MY-010909] [Server] /usr/sbin/mysqld:
Forcing close of thread 35 user: 'root'.
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] The member is leaving a group without being on one.'
[System] [MY-010910] [Server] /usr/sbin/mysqld:
Shutdown complete (mysqld 8.0.16) MySQL Community Server - GPL.
[Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close
of thread 10 user: 'clusteradmin'.
[Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close
of thread 35 user: 'root'.
[ERROR] [MY-011735] [Repl] Plugin group_replication reported:
'[GCS] The member is leaving a group without being on one.'
[System] [MY-010910] [Server] /usr/sbin/mysqld:
Shutdown complete (mysqld 8.0.16) MySQL Community Server - GPL

And when the quorum has been forced on mysql1, as soon as the network issue is resolved, none of the other nodes will rejoin the Group, and the DBA will have to use the Shell to perform cluster.rejoinInstance(instance) or restart mysqld on the instances that shut themselves down.

Conclusion

So as you can see, by default MySQL InnoDB Cluster and Group Replication are very protective against split-brain situations. And this can even be enforced to avoid problems when human interaction is needed.

The rule of thumb to avoid problems is to set group_replication_unreachable_majority_timeout to something you can deal with, and group_replication_exit_state_action to ABORT_SERVER, on (total amount of members in the cluster / 2) + 1 nodes, rounded to an integer 😉

If you have 3 nodes, then on 2 of them ! Of course it might be much simpler to set it on all nodes.

Be aware that if you don’t react within the time frame defined by group_replication_unreachable_majority_timeout, all your servers will shut down and you will have to restart one.

MySQL 8.0.16: how to validate JSON values in NoSQL with check constraint


As you may have noticed, MySQL 8.0.16 has been released today !

One of the major long-expected features is the support of CHECK constraints.

My colleague, Dave Stokes, already posted an article explaining how this works.

In this post, I wanted to show how we could take advantage of this new feature to validate JSON values.

Let’s take the following example:

So we have a collection of documents representing ratings from a user on some episodes. Now, I expect that the value for the rating should be between 0 and 20.

Currently I could enter whatever value, even characters…

To avoid characters, I can already create a virtual column as an integer (something along the lines of the sketch below):
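The statement looks roughly like this. It’s only a sketch: the collection name rates and the attribute path $.rating are assumptions based on the example above:

mysql> ALTER TABLE rates ADD COLUMN rating INTEGER
GENERATED ALWAYS AS (doc->>'$.rating') VIRTUAL;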

So now, only integer values for rating should be allowed:

Perfect, but can I enter any integer value ?

In fact yes, of course ! And that’s where the new CHECK constraints come into action !

We first need to modify any current document having a value for the rating attribute that won’t be valid for the new constraint, and then we can add it (see the sketch below).
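Adding the constraint itself would then look something like this (again a sketch with the same hypothetical table and column names):

mysql> ALTER TABLE rates
ADD CONSTRAINT rating_range CHECK (rating >= 0 AND rating <= 20);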

And now we can test again:

Woohooo! A nice feature that also benefits the MySQL Document Store !

For the curious who want to see what the table looks like in its SQL definition:

Enjoy NoSQL with MySQL 8.0 Document Store #MySQL8isGreat.

What configuration settings did I change on my MySQL Server ?


This post is just a reminder on how to find which settings have been set on MySQL Server.

If you have modified some settings from a configuration file or during runtime (persisted or not), these two queries will show you what the values are and how they were set. Even if the value is the same as the default (COMPILED) in MySQL, if you have set it somewhere you will be able to see where you did it.

Global Variables

First, let’s list all the GLOBAL variables that we have configured in our server:

SELECT t1.VARIABLE_NAME, VARIABLE_VALUE, VARIABLE_SOURCE
FROM performance_schema.variables_info t1
JOIN performance_schema.global_variables t2
ON t2.VARIABLE_NAME=t1.VARIABLE_NAME
WHERE t1.VARIABLE_SOURCE != 'COMPILED';

This is an example of the output:

Session Variables

And now the same query for the session variables:

SELECT t1.VARIABLE_NAME, VARIABLE_VALUE, VARIABLE_SOURCE
FROM performance_schema.variables_info t1
JOIN performance_schema.session_variables t2
ON t2.VARIABLE_NAME=t1.VARIABLE_NAME
WHERE t1.VARIABLE_SOURCE = 'DYNAMIC';

And an example:

You can also find some more info in this previous post. If you are interested in default values of different MySQL version, I also invite you to visit Tomita Mashiro‘s online tool : https://tmtm.github.io/mysql-params/

In case you submit bugs to MySQL, I invite you to also add the output of these two queries.

Using the new MySQL Shell Reporting Framework to monitor InnoDB Cluster


With MySQL Shell 8.0.16, a new very interesting feature was released: the Reporting Framework.

Jesper already blogged about it and I recommend you to read his articles if you are interested in writing your own report:

  • https://mysql.wisborg.dk/2019/04/26/mysql-shell-8-0-16-built-in-reports/
  • https://mysql.wisborg.dk/2019/04/27/mysql-shell-8-0-16-user-defined-reports/

In this post, I will show you one user-defined report that can be used to monitor your MySQL InnoDB Cluster / Group Replication.

Preparation

Before being able to use the report, you need to download 2 files. The first one is the addition to sys that I often use to monitor MySQL InnoDB Cluster:

And the second one is the report:

Once downloaded, you can unzip them and install them:

On your Primary-Master run:

mysqlsh --sql clusteradmin@mysql1 < addition_to_sys_GR.sql

Now install the report on your MySQL Shell client’s machine:

$ mkdir -p ~/.mysqlsh/init.d
$ mv gr_info.py ~/.mysqlsh/init.d

Usage

Once installed, you just need to relaunch the Shell and you are ready to call the new report using the \show command:

Now let’s see the report in action when I block all writes on mysql2 with a FLUSH TABLES WITH READ LOCK (FTWRL) and call the report with \watch:
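In practice this looks something like the following sketch. It assumes the report registered by the file above is named gr_info, and that \watch’s interval option sets the refresh rate in seconds:

On mysql2 (SQL):
mysql> FLUSH TABLES WITH READ LOCK;

From the MySQL Shell client:
MySQL JS > \watch gr_info --interval=2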

Conclusion

Yet another nice addition to MySQL Shell. With this report you can see which members still have quorum, how many transactions each node still has to apply, …

Don’t hesitate to share your reports too !


MySQL InnoDB Cluster : Recovery Process Monitoring with the MySQL Shell Reporting Framework


As explained in this previous post, it’s now (since 8.0.16) possible to use the MySQL Shell Reporting Framework to monitor MySQL InnoDB Cluster.

Additionally, when a member of the MySQL InnoDB Cluster’s Group leaves the group for any reason, or when a new node is added from a backup, this member needs to sync up with the other nodes of the cluster. This process is called the Distributed Recovery.

During the Distributed Recovery, the joiner receives from a donor all the missing transactions using asynchronous replication on a dedicated channel.

It’s of course also possible to monitor the progress of this recovery process by calculating how many transactions still have to be applied locally.
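One way to estimate that, shown here as a sketch rather than as the report’s actual implementation, is to compare the GTIDs the joiner has received with the ones it has already executed:

mysql> SELECT GTID_SUBTRACT(RECEIVED_TRANSACTION_SET, @@global.gtid_executed) AS not_yet_applied
FROM performance_schema.replication_connection_status
WHERE CHANNEL_NAME = 'group_replication_applier';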

You can download the report file and uncompress it in ~/.mysqlsh/init.d/:

This report must be run only when connected to the joiner:

We can see that as soon as we reach 0, the node finishes the Recovery Process and joins the Group.

the MySQL Team in Austin, TX


At the end of the month, some engineers of the MySQL Team will be present in Austin, TX !

We will attend the first edition of Percona Live USA in Texas.

During that show, you will have the chance to meet key engineers, product managers, as well as Dave and myself.

Let me present you the Team that will be present during the conference:

The week will start with the MySQL InnoDB Cluster full-day tutorial by Kenny and myself. This tutorial is a full hands-on tutorial where we will start by migrating a classical asynchronous master-replicas topology to a new MySQL InnoDB Cluster. We will then go through several labs where we will see how to maintain our cluster.

If you registered for our tutorial, please come with a laptop able to run 3 VirtualBox VMs that you can install from a USB stick. So please free up some disk space and install the latest VirtualBox on your system.

This year, I will also have the honor to present the State of the Dolphin during the keynote.

During the conference, you will be able to learn a lot from our team on many different topics. Here is the list of the sessions by our engineers:

We will also be present in the expo hall where we will welcome you at our booth. We will show you demos of MySQL InnoDB Cluster and MySQL 8.0 Document Store, where NoSQL and SQL live in peace together ! Don’t hesitate to visit us during the show.

We will also be present during the Community Dinner and will enjoy hearing your thoughts about MySQL !

See you in almost 2 weeks in Texas !

MySQL Group Replication: what are those UDFs ?


To operate a MySQL Group Replication (InnoDB Cluster) setup more easily, the Group Replication plugin provides some UDFs.

If you have read the recent article from Tiago Vale about the Group Replication Communication Protocol, you may have heard about two new UDFs allowing you to get or set the communication protocol.

So what are all the UDFs provided with Group Replication and what is their purpose ?

SELECT UDF_NAME FROM performance_schema.user_defined_functions 
WHERE UDF_NAME LIKE 'group_repl%';
+-------------------------------------------------+
 | UDF_NAME                                        |
 +-------------------------------------------------+
 | group_replication_get_communication_protocol    |
 | group_replication_get_write_concurrency         |
 | group_replication_set_as_primary                |
 | group_replication_set_communication_protocol    |
 | group_replication_set_write_concurrency         |
 | group_replication_switch_to_multi_primary_mode  |
 | group_replication_switch_to_single_primary_mode |
 +-------------------------------------------------+

Some of these UDFs can be called by a cluster method when using the Shell:

  • cluster.setPrimaryInstance()
  • cluster.switchToMultiPrimaryMode()
  • cluster.switchToSinglePrimaryMode()

In case you mix the UDFs managing the topology with the Shell’s methods, you might encounter a mismatch that will require a rescan():
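For instance, switching the topology mode straight from SQL with one of those UDFs, instead of using the equivalent Shell method, is the kind of operation that can leave the InnoDB Cluster metadata out of sync:

mysql> SELECT group_replication_switch_to_multi_primary_mode();
-- the Shell equivalent, which also keeps the cluster metadata in sync, is:
-- JS > cluster.switchToMultiPrimaryMode()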

Perl & MySQL 8.0


If you have just migrated to MySQL 8.0, you may have seen that the default authentication plugin has been changed to a more secure one: caching_sha2_password. I’ve already written some articles about it.

Now let’s discover how Perl users can deal with MySQL 8.0.

The driver to use MySQL with Perl is perl-DBD-MySQL. MySQL 8.0 is supported, but the new authentication plugin might not be. This depends on the mysql library linked during compilation of the module.

problem connecting to MySQL 8.0

The error you may encounter is the following:

DBI connect('host=localhost','fred',...) failed: Authentication plugin
'caching_sha2_password' cannot be loaded:
/usr/lib64/mysql/plugin/caching_sha2_password.so:
cannot open shared object file: No such file or directory at ./perl_example.pl line 8.

So if you encounter this problem when using perl-DBD-MySQL to connect to MySQL 8.0, you may have a driver that doesn’t yet support the new plugin. If this is the case, the fastest and easiest, but not safest, solution is to use the older authentication plugin: mysql_native_password.

mysql_native_password

To verify which authentication plugin the user uses, you can run the following query:

mysql> select user, plugin from mysql.user where user='fred';
+------+-----------------------+
| user | plugin |
+------+-----------------------+
| fred | caching_sha2_password |
+------+-----------------------+

You can see that this is the new one used in MySQL 8.0. Let’s change it:

mysql> alter user 'fred' identified 
with 'mysql_native_password' by 'mysecurepasswd';

Pay attention that you need to set the password.

Please do not manually modify the mysql.user table with any UPDATE statement.

caching_sha2_password

If you want to use the new safer authentication mechanism, you need to verify if your perl-DBD-MySQL module is linked with a library that supports it:

$ ldd /usr/lib64/perl5/vendor_perl/auto/DBD/mysql/mysql.so | grep  'mysql\|maria'
libmysqlclient.so.18 => /usr/lib64/mysql/libmysqlclient.so.18 (0x00007f0f632ee000)

This version (libmysqlclient.so.18) doesn’t support the new authentication plugin. You need to have at least libmysqlclient.so.21 or libmariadb.so.3.

By default on CentOS/RHEL/OL 7, perl-DBD-MySQL is linked with an old version of mariadb-libs (5.5) or uses mysql-community-libs-compat (especially if you upgraded to 8.0.x).

In latest Fedora, this is not the case:

$ ldd /usr/lib64/perl5/vendor_perl/auto/DBD/mysql/mysql.so | grep  'mysql\|maria'
libmariadb.so.3 => /lib64/libmariadb.so.3 (0x00007fecbc8b6000)

This library is installed by mariadb-connector-c-3.0.10, which supports MySQL 8.0’s new authentication mechanism too.

In case you want to use caching_sha2_password anyway with CentOS/RHEL/OL 7.x, I’ve made this rpm that is compiled with the new MySQL library:

$ ldd /usr/lib64/perl5/vendor_perl/auto/DBD/mysql/mysql.so | grep 'mysql\|maria'
libmysqlclient.so.21 => /usr/lib64/mysql/libmysqlclient.so.21 (0x00007f0b045fb000)

$ rpm -qf /usr/lib64/mysql/libmysqlclient.so.21
mysql-community-libs-8.0.16-1.el7.x86_64

conclusion

MySQL 8.0 becomes more and more popular, and even connectors other than the native ones now support it. Of course, if you want to use it on a system not yet running the latest releases, you need recent libraries, but this is not very complicated.

MySQL: CPU information from SQL


Do you know that it’s possible to get information from the CPUs of your MySQL Server from SQL ?

If you enable the CPU counters of the INNODB_METRICS table in INFORMATION_SCHEMA, you will be able to query CPU information.

First, check if those counters are enabled:

MySQL> SELECT name, subsystem, status 
FROM INFORMATION_SCHEMA.INNODB_METRICS where NAME like 'cpu%';
+---------------+-----------+----------+
| name | subsystem | status |
+---------------+-----------+----------+
| cpu_utime_abs | cpu | disabled |
| cpu_stime_abs | cpu | disabled |
| cpu_utime_pct | cpu | disabled |
| cpu_stime_pct | cpu | disabled |
| cpu_n | cpu | disabled |
+---------------+-----------+----------+
5 rows in set (0.00 sec)

By default, they are not, let’s enable them:

MySQL> SET GLOBAL innodb_monitor_enable='cpu%';
Query OK, 0 rows affected (0.00 sec)

MySQL> SELECT name, subsystem, status 
FROM INFORMATION_SCHEMA.INNODB_METRICS where NAME like 'cpu%';
 +---------------+-----------+---------+
 | name          | subsystem | status  |
 +---------------+-----------+---------+
 | cpu_utime_abs | cpu       | enabled |
 | cpu_stime_abs | cpu       | enabled |
 | cpu_utime_pct | cpu       | enabled |
 | cpu_stime_pct | cpu       | enabled |
 | cpu_n         | cpu       | enabled |
 +---------------+-----------+---------+

Now, it’s very easy to see the content of these metrics:

MySQL> select * from information_schema.INNODB_METRICS where name like 'cpu%'\G

This new feature can be very useful in cloud environments where this information is not always available, or in environments where the DBA doesn’t have system access (I hope this is not too common an environment).
