
Let’s talk together at BigDataLDN


The event takes place on 3-4 November 2016 at Olympia Conference Centre, London and will provide you with the tools to deliver your most effective data-driven strategy.

We will be exhibiting on stand 514 and we would be delighted
to speak with you over the 2 days.

Come over and speak to us about how your business can benefit from our open source database solutions. We will be offering demos of our products and will show you how your business can benefit from them.

GET YOUR FREE TICKET. REGISTER HERE.


My slides of devops days Ghent, Belgium are now online


Today I delivered a session on what MySQL is implementing to make the DevOps life easier.

You can find the slides below:

Group Replication is GA with MySQL 5.7.17 – comparison with Galera


It’s wonderful news: we have released MySQL 5.7.17 with the Group Replication plugin (GA quality).

From the definition, Group Replication is a multi-master update everywhere replication plugin for MySQL with built-in conflict detection and resolution, automatic distributed recovery, and group membership.

So we can indeed compare this solution with Galera from Codership, which is a replication plugin implementing the WSREP API. WSREP (Write Set Replication) extends the replication API to provide all the information and hooks required for true multi-master, “virtually synchronous” replication.

With Group Replication, MySQL implemented all of this in the plugin itself. Our engineers leveraged existing standard MySQL infrastructure (GTIDs, Multi-Source Replication, Multi-threaded Slave Applier, Binary Logs, …) and have been preparing InnoDB over several releases to provide the necessary features, such as High Priority Transactions in InnoDB since 5.7.6.

This means that Group Replication is based on well-known and trusted components, which makes integration and adoption easier.

Both solutions are based on Replicated Database State Machine theory.

What are the similarities between both solutions?

MySQL Group Replication and Galera use write sets. A write set is a set of globally unique identifiers for each logical item changed by the transaction when it executed (an item may be a row, a table, a metadata object, …).

So Group Replication and Galera use ROW binary log events, and together with the transaction data, the write sets are streamed synchronously from the server that received the write (the master for that specific transaction) to the other members/nodes in the cluster.

Both solutions then use the write sets to check for conflicts between concurrent transactions executing on different replicas; this procedure is called certification. Each member certifies the write set (transaction) locally and asynchronously queues the accepted changes to be applied.

Both implementations use a group communication engine that manages quorums, membership, message passing, …

So what is different then?

The biggest difference is that Group Replication (GR) is a plugin for MySQL, made by MySQL, packaged and distributed with MySQL by default. Also, GR is available and supported on all MySQL Platforms: Linux, Windows, Solaris, OSX, FreeBSD.

As said before, GR also uses all the same infrastructure that people are used to (binlogs, GTIDs, …). In addition to familiarity and trust, this makes it much easier to integrate a Group Replication cluster into more complex topologies where different asynchronous master/slaves are also involved.

There are many implementation differences. I’ll list them in those categories:

  1. Group Communication
  2. InnoDB
  3. Binary Log & Snapshot
  4. GTID, Master-Master & Master-Slaves
  5. Monitoring

Group Communication

Galera uses a proprietary group communication system layer, which implements a virtual synchrony QoS based on the Totem single-ring ordering protocol. MySQL Group Replication uses a Group Communication System (GCS) based on a variant of the popular Paxos algorithm.

This allows GR to achieve much better network performance, greatly reducing the overall latency within the distributed system (more information about this in Vitor’s blog post). In fact, the more nodes you add (currently GR supports up to 9 nodes per group), the more the commit time increases in Galera, while it stays almost stable with GR. This is due to GR using peer-to-peer style communication versus Galera’s token ring.

InnoDB

Compared to Galera, which needs to patch MySQL and add an extra layer to be able to kill a local transaction when there are certification conflicts, Group Replication uses High Priority Transactions in InnoDB, which allows it to ensure that conflicts are detected and handled properly.

Binary Log

Even though it requires binlog_format=ROW, Galera doesn’t need the binary logs to be enabled. It’s nevertheless recommended to enable them for point-in-time recovery, asynchronous replication to a slave outside the cluster, or for forensic purposes. So Galera doesn’t use the binary log to perform the incremental synchronization between the nodes.

Galera uses an extra file called the gcache (Galera Cache). Until the latest Galera release (3.19), this file was not resilient, and even now its persistence is not guaranteed. The data stored inside this file can’t be used for anything other than IST (Incremental State Transfer).

In Group Replication, we keep using the binary log files for that purpose. So if a node was out for a short period, it will perform the synchronization from the binary logs of the node that has been elected as donor. This is called IST in Galera (from the gcache when data is available) and Automated Distributed Recovery in GR.

Basing our solution on binary logs allows us to have the data safely persisted (flushed and synced). It is also a well-known format and, as mentioned above, binary logs serve many purposes (distributed recovery, asynchronous replication, point-in-time recovery, streaming or piping to other systems like Kafka, … and they can even be used to perform schema changes!).

The Galera Cache file is used to store the write sets in a circular buffer and has a pre-defined size. So it might happen that IST is impossible and a full state transfer (SST) is required.

And this is maybe one of the advantages of Galera for people having a lot of network or hardware problems: the full data provisioning. It’s true that with Galera, when a new node is added to the cluster, it’s possible not to prepare the new node in advance. This is very convenient for newbies. We understand the need for a better solution. Currently, in Group Replication, this process is pretty much the same as provisioning a slave when using regular replication.

However, every experienced Galera DBA can also tell you that they try to avoid SST as much as possible.

GTID, Master-Master, Master-Slave

Like Galera, GR has one UUID attributed to the cluster. The difference with Galera is that, even though all nodes in the same group share that UUID, in GR each member has its own sequence number range (defined by group_replication_gtid_assignment_block_size).

And like Galera, if your workload allows it (more to come in a future post), you can use a multi-master cluster and write on all the nodes at the same time. But as this is somehow synchronized, it won’t scale up writes anyway. So, even if it’s not really advertised for Galera, with Group Replication we recommend writing to a single master at a time to reduce the probability of conflicts.

Writing on one single master also allows you to avoid probable issues when dealing with schema changes while data is being modified on another node at the same time.

This is why, by default, in MySQL Group Replication your cluster runs in Single Primary Mode (controlled by group_replication_single_primary_mode). This means the group itself will automatically elect a leader and keep managing this task when the group changes (in case of failure of the leader). Don’t forget that Group Replication is first of all a High Availability solution.
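If you want to check which member is currently acting as primary, here is a quick sketch (based on the group_replication_primary_member status variable exposed in 5.7):

mysql> SELECT MEMBER_HOST, MEMBER_PORT
       FROM performance_schema.replication_group_members
       WHERE MEMBER_ID = (SELECT VARIABLE_VALUE
                          FROM performance_schema.global_status
                          WHERE VARIABLE_NAME = 'group_replication_primary_member');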

Of course, even when using the cluster in Single Primary Mode, the limitations and recommendations related to Group Replication still apply (like disabling binlog checksums, using only InnoDB tables, letting the group manage the auto_increment related variables, …), but there are fewer of them.

Monitoring

Unlike Galera, which uses only status variables (if I remember correctly), Group Replication uses Performance Schema to expose information. The Galera fork included in Percona XtraDB Cluster also uses performance_schema in its 5.7 version.

For example, in Galera it’s not easy to find, from any node, which other nodes are in the cluster and what their status is. With Group Replication we expose all that in performance_schema:

select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: e8fe7c39-ada4-11e6-8891-08002718d305
MEMBER_HOST: mysql3
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: e920a7cf-ada4-11e6-8971-08002718d305
MEMBER_HOST: mysql2
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: e92186b1-ada4-11e6-ba00-08002718d305
MEMBER_HOST: mysql1
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

As you can see, the Performance_Schema tables offer an easy and intuitive way to get information and stats on an individual node and the group as a whole.

If you are using a solution that requires a health check to monitor the nodes and decide on the routing from the application to the right node(s), you can also base your script on the sys schema, which provides views with all the information you need to make the right routing decision.
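As an illustration (only a minimal sketch, your health check will probably be richer), such a script could simply verify that the local member is ONLINE:

mysql> SELECT IF(MEMBER_STATE = 'ONLINE', 1, 0) AS is_usable
       FROM performance_schema.replication_group_members
       WHERE MEMBER_ID = @@server_uuid;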

Conclusion

So, it’s really true that Galera benefits from many years of experience and still has more features, some major like the arbitrator[1], some minor like node weight, sync wait, segments, … but Group Replication is a solid contender, certainly if you are looking for great performance.

If you think that you are missing something to adopt this technology, just drop me a comment explaining your need. Also don’t hesitate to comment on this blog post if I missed something or if you don’t agree on some points; I can always review my thoughts.

 

[1] I was never a big fan of using an arbitrator in Galera: all the data needs to reach that node anyway, and at today’s storage prices I consider it much safer to have a real cluster node where the data is also replicated. 3 copies of the data are always better than 2 😉

How to migrate from Galera Cluster to MySQL Group Replication


In this article, I will show you how it’s possible to perform an online migration from a 3-member Galera cluster setup (in this case I’m using PXC 5.7.14) to a 3-member MySQL Group Replication cluster setup (MySQL Community 5.7.17).

Don’t forget that before adopting Group Replication as database backend, you should validate that your application matches GR requirements and limitations. Once this is validated, you can start!

So first, let’s have a look at the current situation:

 

We have an application (sysbench 0.5) reading from and writing to a Galera Cluster (Percona XtraDB Cluster 5.7.14) via ProxySQL. We write on all the nodes (Multi-Master) and we will do the same on our MySQL Group Replication cluster: we will use a Multi-Primary group.

This is the command used to simulate our application:

while true; do sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
     --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-password=fred \
     --mysql-table-engine=innodb --mysql-user=fred --max-requests=0 \
     --tx-rate=20 --num-threads=2  --report-interval=1 run ; done;

And this is an overview of the machines used:

Hostname  OS        Software                IP             Server_id
mysql1    CentOS 7  PXC 5.7.14              192.168.90.10  1
mysql2    CentOS 7  PXC 5.7.14              192.168.90.11  2
mysql3    CentOS 7  PXC 5.7.14              192.168.90.12  3
app       CentOS 7  ProxySQL, sysbench 0.5  192.168.90.13  n/a

So the goal will be to replace all those PXC nodes, one by one, by MySQL 5.7.17 with Group Replication, and avoid downtime.

For those familiar with ProxySQL this is how we see the Galera nodes in the proxy:

ProxySQL Admin> select hostname, status, hostgroup_id from runtime_mysql_servers;
+---------------+--------+--------------+
| hostname      | status | hostgroup_id |
+---------------+--------+--------------+
| 192.168.90.12 | ONLINE | 1            |
| 192.168.90.10 | ONLINE | 1            |
| 192.168.90.11 | ONLINE | 1            |
+---------------+--------+--------------+

For the other ones, you can find more info in the previous post: HA with MySQL Group Replication and ProxySQL.

To be able to proceed as planned, we need to have binary logs enabled on every PXC node and also use MySQL GTIDs.
So in my.cnf you must have:

enforce_gtid_consistency = on
gtid_mode  = on
log_bin 
log_slave_updates
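A quick way to double-check that these settings are effective on each node (just a sanity-check sketch) is:

mysql> SELECT @@gtid_mode, @@enforce_gtid_consistency, @@log_bin, @@log_slave_updates;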

First Step: remove one node and migrate it to MySQL 5.7.17

Our first step in this section will be to stop mysqld and remove the PXC packages on mysql3:

[root@mysql3]# systemctl stop mysql

ProxySQL Admin> select hostname, status from runtime_mysql_servers;
+---------------+---------+
| hostname      | status  |
+---------------+---------+
| 192.168.90.11 | ONLINE  |
| 192.168.90.12 | SHUNNED |
| 192.168.90.10 | ONLINE  |
+---------------+---------+

Our application is of course still running (although it might get disconnected), which is why in this case sysbench runs in a loop.

As all our nodes are running CentOS 7, we will use the mysql57 community repo for el7.

[root@mysql3 ~]# yum install http://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm

Now we can change the packages:

[root@mysql3 ~]# yum -y swap Percona-XtraDB-Cluster* mysql-community-server mysql-community-libs-compat
...
=========================================================================================================
 Package                                   Arch     Version              Repository                 Size
=========================================================================================================
Installing:
 mysql-community-libs-compat               x86_64   5.7.17-1.el7         mysql57-community         2.0 M
 mysql-community-server                    x86_64   5.7.17-1.el7         mysql57-community         162 M
Removing:
 Percona-XtraDB-Cluster-57                 x86_64   5.7.14-26.17.1.el7   @percona-release-x86_64   0.0  
 Percona-XtraDB-Cluster-client-57          x86_64   5.7.14-26.17.1.el7   @percona-release-x86_64    37 M
 Percona-XtraDB-Cluster-server-57          x86_64   5.7.14-26.17.1.el7   @percona-release-x86_64   227 M
 Percona-XtraDB-Cluster-shared-57          x86_64   5.7.14-26.17.1.el7   @percona-release-x86_64   3.7 M
 Percona-XtraDB-Cluster-shared-compat-57   x86_64   5.7.14-26.17.1.el7   @percona-release-x86_64   6.7 M
Installing for dependencies:
 mysql-community-client                    x86_64   5.7.17-1.el7         mysql57-community          24 M
 mysql-community-common                    x86_64   5.7.17-1.el7         mysql57-community         271 k
 mysql-community-libs                      x86_64   5.7.17-1.el7         mysql57-community         2.1 M

Transaction Summary
=========================================================================================================
Install  2 Packages (+3 Dependent packages)
Remove   5 Packages

After that step, it’s time to modify my.cnf, comment out all wsrep and pxc related variables, and add some extra ones that are mandatory:

binlog_checksum = none
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name="afb80f36-2bff-11e6-84e0-0800277dd3bf"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "192.168.90.12:3406"
loose-group_replication_group_seeds= "192.168.90.10:3406,192.168.90.11:3406"
loose-group_replication_bootstrap_group= off
loose-group_replication_single_primary_mode= off

Then we move that server to another hostgroup in ProxySQL:

ProxySQL Admin> update mysql_servers set hostgroup_id =2 where hostname ="192.168.90.12";
ProxySQL Admin> load mysql servers to runtime;
ProxySQL Admin> select hostname, status, hostgroup_id from runtime_mysql_servers;
+---------------+---------+--------------+
| hostname      | status  | hostgroup_id |
+---------------+---------+--------------+
| 192.168.90.11 | ONLINE  | 1            |
| 192.168.90.10 | ONLINE  | 1            |
| 192.168.90.12 | SHUNNED | 2            |
+---------------+---------+--------------+

It’s time now to start mysqld:

[root@mysql3 ~]# systemctl start mysqld
[root@mysql3 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.17-log MySQL Community Server (GPL)
...

Step 2: create a Group Replication cluster of 1 node

Now we need to bootstrap our group.

This is a very easy step:

mysql3> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql3> SET GLOBAL group_replication_bootstrap_group=ON;
mysql3> START GROUP_REPLICATION;
mysql3> SET GLOBAL group_replication_bootstrap_group=OFF;

Now the Group is started:

mysql3> select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 9e8416d7-b1c6-11e6-bc10-08002718d305
 MEMBER_HOST: mysql3.localdomain
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

It’s not needed right now, but this is the best time to also set up the credentials for the recovery process, otherwise you might forget it later.

mysql3> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='password' 
             FOR CHANNEL 'group_replication_recovery';

This user is not yet created, but we will create it during the next step (and it will be replicated to all nodes).

Step 3: make this MySQL 5.7.17 replicate from PXC

So now we need to create a replication user on the Galera cluster that we will use for this new MySQL 5.7 server (and later for the Group Replication’s recovery process):

mysql1> CREATE USER 'repl'@'192.168.90.%' IDENTIFIED BY 'password';
mysql1> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.90.%';

And we can start asynchronous replication from the Galera Cluster to this new MySQL server:

mysql3> CHANGE MASTER TO MASTER_HOST="192.168.90.10", MASTER_USER="repl", 
        MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1;
mysql3> START SLAVE;
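Before going further, a quick sanity check doesn’t hurt (just a sketch): both Slave_IO_Running and Slave_SQL_Running should report Yes.

mysql3> SHOW SLAVE STATUS\G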

Now we have the following environment:

 

Step 4: migrate an extra node to the Group

Now we will almost do the same with mysql2:

  1. stop mysql
  2. install mysql community repository
  3. swap the packages
  4. modify my.cnf
  5. put mysql2 in hostgroup_id 2 in ProxySQL
  6. start mysqld
  7. join the group

Let’s skip points 1 to 4.

Unlike Galera, with MySQL Group Replication it’s mandatory that all the nodes have a unique server_id. We must be careful about this; in this case we will set it to 2.

Don’t forget to also swap the addresses for mysql3 and mysql2 between group_replication_local_address and group_replication_group_seeds:

loose-group_replication_local_address= "192.168.90.11:3406"
loose-group_replication_group_seeds= "192.168.90.10:3406,192.168.90.12:3406"

Put mysql2 in hostgroup_id 2 in ProxySQL, so you have:

ProxySQL Admin> select hostname, status, hostgroup_id from runtime_mysql_servers;
+---------------+---------+--------------+
| hostname      | status  | hostgroup_id |
+---------------+---------+--------------+
| 192.168.90.10 | ONLINE  | 1            |
| 192.168.90.12 | ONLINE  | 2            |
| 192.168.90.11 | SHUNNED | 2            |
+---------------+---------+--------------+

Start mysqld and it’s time to join the group (point 7) !

mysql2> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql2> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='password' 
        FOR CHANNEL 'group_replication_recovery';
mysql2> START GROUP_REPLICATION;

So this is the new situation:

mysql3> select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 5221ffcf-c1e0-11e6-b1f5-08002718d305
 MEMBER_HOST: pxc2
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 5a2d38db-c1e0-11e6-8bf6-08002718d305
 MEMBER_HOST: pxc3
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

Step 5: move the application to our new Cluster

It’s now time to let the application connect to our new MySQL Group Replication cluster. In ProxySQL, we change the hostgroup_id for mysql2 and mysql3 to 1 and to 2 for mysql1, then we load it to runtime and we stop mysql on mysql1:

ProxySQL Admin> select hostname, status, hostgroup_id from runtime_mysql_servers;
+---------------+--------+--------------+
| hostname      | status | hostgroup_id |
+---------------+--------+--------------+
| 192.168.90.10 | ONLINE | 1            |
| 192.168.90.12 | ONLINE | 2            |
| 192.168.90.11 | ONLINE | 2            |
+---------------+--------+--------------+
ProxySQL Admin> update mysql_servers set hostgroup_id =2 where hostname ="192.168.90.10";
ProxySQL Admin> update mysql_servers set hostgroup_id =1 where hostname ="192.168.90.11";
ProxySQL Admin> update mysql_servers set hostgroup_id =1 where hostname ="192.168.90.12";

ProxySQL Admin> load mysql servers to runtime;

ProxySQL Admin> select hostname, status, hostgroup_id from runtime_mysql_servers;
+---------------+--------+--------------+
| hostname      | status | hostgroup_id |
+---------------+--------+--------------+
| 192.168.90.12 | ONLINE | 1            |
| 192.168.90.11 | ONLINE | 1            |
| 192.168.90.10 | ONLINE | 2            |
+---------------+--------+--------------+

In this case, as we are using ProxySQL, as soon as mysql1 (192.168.90.10) changes group, all the connections to it are killed and they will reconnect to the new nodes that are now part of the MySQL Group Replication cluster.

[root@mysql1 ~]# systemctl stop mysql

To finish, we have two options: either we configure the remaining PXC node as a slave for a while, so we can still decide to roll back the migration (I would then consider adding an extra node to the current MySQL Group Replication cluster, as with 2 nodes the cluster is not tolerant to any failure), or we directly migrate the last Galera node to Group Replication.
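If you choose the first option, after starting mysqld again on mysql1, the setup is simply the same asynchronous replication we used earlier, but reversed (a sketch pointing mysql1 to mysql3 and reusing the credentials created above):

mysql1> CHANGE MASTER TO MASTER_HOST='192.168.90.12', MASTER_USER='repl', 
        MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1;
mysql1> START SLAVE;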

Conclusion

As you could see, migrating your current Galera environment to MySQL Group Replication is not complicated and can be done with really minimal impact.

Don’t hesitate to leave your comments or questions as usual.

 

Note:

It’s also possible to swap Step 2 and Step 3, meaning that the asynchronous replication is started before the bootstrap of the Group Replication. In that case, it might happen that the asynchronous replication fails while starting Group Replication, as the replication recovery is started and therefore no transaction can be executed.

You can see the following in SHOW SLAVE STATUS:

Last_SQL_Error: Error in Xid_log_event: Commit could not be completed, 
'Error on observer while running replication hook 'before_commit'.'

The error log also gives you information about it:

[Note] Plugin group_replication reported: 'Starting group replication recovery 
       with view_id 14817109352506883:1'
[Note] Plugin group_replication reported: 'Only one server alive. 
       Declaring this server as online within the replication group'
[ERROR] Plugin group_replication reported: 'Transaction cannot be executed while Group Replication 
        is recovering. Try again when the server is ONLINE.'
[ERROR] Run function 'before_commit' in plugin 'group_replication' failed
[ERROR] Slave SQL for channel '': Error in Xid_log_event: Commit could not be completed, 
        'Error on observer while running replication hook 'before_commit'.', Error_code: 3100
[Warning] Slave: Error on observer while running replication hook 'before_commit'. Error_code: 3100
[ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart 
        the slave SQL thread with "SLAVE START". We stopped at log 'pxc1-bin.000009' position 76100764
[Note] Plugin group_replication reported: 'This server was declared online within the replication group'

Restarting replication solves the problem:

mysql3> STOP SLAVE;
mysql3> START SLAVE;

 

MySQL Group Replication and table design


Today’s article is about the first two restrictions in the requirements page of the manual:

  • InnoDB Storage Engine: data must be stored in the InnoDB transactional storage engine.
  • Primary Keys: every table that is to be replicated by the group must have an explicit primary key defined.

So the first requirement is easy to check with a simple query that lists all the non-InnoDB tables:

SELECT table_schema, table_name, engine, table_rows, 
       (index_length+data_length)/1024/1024 AS sizeMB 
FROM information_schema.tables 
WHERE engine != 'innodb' 
  AND table_schema NOT IN 
    ('information_schema', 'mysql', 'performance_schema');
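If the query returns rows, converting a table is usually a one-liner; mydb.mytable below is just a placeholder for a table reported by the query:

mysql> ALTER TABLE mydb.mytable ENGINE=InnoDB;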

The second one is a bit more tricky. Let me show you first how Group Replication behaves:

Case 1: no keys

Let’s create a table with no Primary Key (nor any other key) and then insert one record:

mysql> create table test_tbl_nopk (id int, name varchar(10));
mysql> insert into test_tbl_nopk values (1,'lefred');
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.

And in the error log we can see:

[ERROR] Plugin group_replication reported: 'Table test_tbl_nopk does not have any PRIMARY KEY. 
             This is not compatible with Group Replication'

So far, so good as it’s what we were expecting, right ?

Case 2: no PK, but NOT NULL UNIQUE KEY

Now, if you know InnoDB, when there is no PK defined, InnoDB will use the first NOT NULL UNIQUE KEY as the PK. How will Group Replication handle that?
Let’s verify:

mysql> create table test_tbl_nopk_uniq_notnull (id int not null unique key, name varchar(10));
mysql> insert into test_tbl_nopk_uniq_notnull values (1,'lefred');
Query OK, 1 row affected (0.01 sec)

Excellent, so Group Replication behaves like InnoDB and allows NOT NULL UNIQUE KEYS.

Case 3: no PK, but NULL UNIQUE KEY

Just to verify, let’s try with a UNIQUE KEY that can be NULL too:

mysql> create table test_tbl_nopk_uniq_null (id int unique key, name varchar(10));
mysql> insert into test_tbl_nopk_uniq_null values (1,'lefred');
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.

This works as expected, then. Why is that? Because in InnoDB, when no primary key is defined, the first unique not-null key is used, as seen above, but if none is available, InnoDB will create a hidden primary key (stored on 6 bytes). The problem with such a key is that its value is global to all InnoDB tables without a PK (which can of course cause contention), and in the case of Group Replication, there is no guarantee that this hidden PK will be the same on the other nodes that are members of the group. That’s why this is not supported.
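If you find such a table, the usual fix is to add an explicit primary key. For example, reusing the table from Case 1 (a sketch, assuming the id values are unique and not NULL; otherwise add a new auto-increment column instead):

mysql> ALTER TABLE test_tbl_nopk ADD PRIMARY KEY (id);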

Conclusion

So if you want to know if you have tables without valid key design for Group Replication, please run the following statement:

SELECT tables.table_schema , tables.table_name , tables.engine 
FROM information_schema.tables 
LEFT JOIN ( 
   SELECT table_schema , table_name 
   FROM information_schema.statistics 
   GROUP BY table_schema, table_name, index_name HAVING 
     SUM( case when non_unique = 0 and nullable != 'YES' then 1 else 0 end ) = count(*) ) puks 
 ON tables.table_schema = puks.table_schema and tables.table_name = puks.table_name 
 WHERE puks.table_name is null 
   AND tables.table_type = 'BASE TABLE' AND Engine="InnoDB";

The query above is the courtesy of Roland Bouman.

MySQL Group Replication Limitations: savepoints


Today in our series of articles related to MySQL Group Replication’s limitations, let’s have a quick look at Savepoints.

The manual is clear about this: Transaction savepoints are not supported.

The first thing to check then is if the application that will use our MySQL Group Replication Cluster is currently using savepoints.

We have two ways to find this, the first is using STATUS variables:

mysql> show global status like '%save%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Com_release_savepoint      | 2     |
| Com_rollback_to_savepoint  | 0     |
| Com_savepoint              | 4     |
| Handler_savepoint          | 0     |
| Handler_savepoint_rollback | 0     |
+----------------------------+-------+

So in our example above, it seems that the application might need some changes to remove those savepoints.

The second option is to use performance_schema:

mysql> SELECT event_name, count_star, sum_errors 
       FROM performance_schema.events_statements_summary_global_by_event_name
       WHERE event_name LIKE '%save%' AND count_star > 0 ;
+-------------------------------------+------------+------------+
| event_name                          | count_star | sum_errors |
+-------------------------------------+------------+------------+
| statement/sql/savepoint             |          4 |          2 |
| statement/sql/release_savepoint     |          2 |          2 |
+-------------------------------------+------------+------------+

So now that we know how to verify if the application was using savepoint, let’s verify what’s happening when savepoints are used in MySQL Group Replication:

mysql> start transaction;
mysql> select now();
...
mysql> savepoint fred;
ERROR 1290 (HY000): The MySQL server is running with the --transaction-write-set-extraction!=OFF 
                    option so it cannot execute this statement

transaction_write_set_extraction defines the algorithm used to hash the writes extracted during a transaction. If you are using Group Replication, the process of extracting those writes from a transaction is crucial for conflict detection on all the nodes that are part of the group, but it also prevents us from using transaction savepoints, as this statement is not compatible with write set extraction.
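You can quickly check how your server is configured (just a sketch):

mysql> SHOW GLOBAL VARIABLES LIKE 'transaction_write_set_extraction';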

 

Pre-Fosdem’17 MySQL Day


This year, I have the honor of organizing, just before the Fosdem MySQL & Friends Devroom, an extra pre-Fosdem MySQL Day. This MySQL Day will take place on the Friday just before the Fosdem weekend.

During that day, we will highlight MySQL 8.0 new features, but not only that.

Oracle’s MySQL Community Team is sponsoring this event. Seating is limited, so please register.

The event is free and the location is the same as the very popular MySQL & Friends Community Dinner.

You can register now on eventbrite.

This is the agenda:

This is the agenda:

Friday 3rd February

09:30 – 10:00  Welcome!
10:00 – 10:25  MySQL 8.0: Server Defaults – An overview of what settings have changed or are under consideration – Morgan Tocker, Oracle (MySQL 8.0)
10:30 – 10:55  MySQL 8.0: Unicode – What, why and how – Bernt Marius Johnsen, Oracle (MySQL 8.0)
11:00 – 11:25  MySQL 8.0: Common Table Expressions (CTEs) – Øystein Grøvlen, Oracle (MySQL 8.0)
11:30 – 11:55  Group Replication – Kenny Gryp, Percona (Group Replication)
12:05 – 12:30  How Booking.com avoids and deals with replication lag – Jean-François Gagné, Booking.com (Replication)
12:30 – 13:15  Lunch
13:15 – 14:10  MySQL for Beginners – Getting Basics Right – Peter Zaitsev, Percona (MySQL)
14:15 – 14:40  MySQL 8.0: Window functions – finally! – Dag H. Wanvik, Oracle (MySQL 8.0)
14:45 – 15:00  Coffee Break
15:00 – 15:25  Using Optimizer Hints to Improve MySQL Query Performance – Øystein Grøvlen, Oracle (MySQL 8.0)
15:30 – 15:45  Monitoring Booking.com without looking at MySQL – Jean-François Gagné, Booking.com (Fun, Sport, Not-MySQL)
15:50 – 16:15  What you wanted to know about your MySQL Server instance, but could not find using internal instrumentation only – Sveta Smirnova, Percona (Troubleshooting)
16:20 – 16:45  ProxySQL Use Case Scenarios – Alkin Tezuysal, Percona (ProxySQL)
16:50 – 17:15  MySQL 8.0: GIS – Are you ready? – Norvald H. Ryeng, Oracle (MySQL 8.0)

MySQL Group Replication, Single-Primary or Multi-Primary, how to make the right decision ?


Today’s blog post is related again to MySQL Group Replication.

By default MySQL Group Replication runs in Single-Primary mode. And it’s the best option and the option you should use.

But it might happen, in very specific cases, that you would like to run your MGR cluster in Multi-Primary mode: writing simultaneously on all the nodes that are members of the group.

It’s of course feasible, but you need to perform some extra verification, as not all workloads are compatible with this behavior of the cluster.

Requirements

The requirements are the same as those for using MGR in Single-Primary mode:

  • InnoDB Storage Engine
  • Primary Keys
  • IPv4 Network
  • Binary Log Active
  • Slave Updates Logged
  • Binary Log Row Format
  • Global Transaction Identifiers On
  • Replication Information Repositories stored to tables
  • Transaction Write Set Extraction set to XXHASH64

You can find more details in the online manual.

Limitations

These are the MySQL Group Replication Limitations as in the manual:

  • Replication Event Checksums must be set to NONE
  • Gap Locks, so better to use READ-COMMITTED as tx_isolation
  • Table Locks and Named Locks are not supported
  • Savepoints are also not supported.
  • SERIALIZABLE Isolation Level is not supported.
  • Concurrent DDL vs DML/DDL Operations
  • Foreign Keys with Cascading Constraints

So from the list above, the limitations that will affect Multi-Primary mode are the concurrent DDL/DML operations and the foreign keys with cascading constraints.

Let’s have a more detailed look at them.

Concurrent DDL vs DML/DDL Operations

The manual says Concurrent DDL vs DML/DDL operations on the same object, executing at different servers, is not supported in multi-primary deployments. Conflicting data definition statements (DDL) executing on different servers are not detected. Concurrent data definition statements and data manipulation statements executing against the same object but on different servers is not supported.

So this is clear. The only thing we can do, then, is to make sure we don’t allow writes on the other nodes when we need to run a DDL. This can be done in your router/proxy solution and/or by setting the other nodes to READ_ONLY.
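One possible sketch (super_read_only is available since MySQL 5.7.8; this is only an illustration, your proxy layer may handle it differently):

mysql> SET GLOBAL super_read_only = ON;   -- on every member except the one that will run the DDL
mysql> SET GLOBAL super_read_only = OFF;  -- afterwards, to re-open writes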

This means that if your application performs DDL on its own (not handled by a DBA), I would recommend not using Multi-Primary at all!

To verify if your application is running such statements, you can run the following query several times during the day and see how the values increase or not:

   SELECT event_name, count_star, sum_errors 
   FROM performance_schema.events_statements_summary_global_by_event_name 
   WHERE event_name REGEXP '.*sql/(create|drop|alter).*' 
     AND event_name NOT REGEXP '.*user';

Foreign Keys with Cascading Constraints

Again, let’s see what the manual says about this limitation: Multi-primary mode groups do not fully support using foreign key constraints. Foreign key constraints that result in cascading operations executed by a multi-primary mode group have a risk of undetected conflicts. Therefore we recommend setting group_replication_enforce_update_everywhere_checks=ON on server instances used in multi-primary mode groups. Disabling group_replication_enforce_update_everywhere_checks and using foreign keys with cascading constraints requires extra care. In single-primary mode this is not a problem.

So let’s find if we have such design:

SELECT CONCAT(t1.table_name, '.', column_name) AS 'foreign key',     
     CONCAT(t1.referenced_table_name, '.', referenced_column_name) AS 'references',     
     t1.constraint_name AS 'constraint name', UPDATE_RULE, DELETE_RULE 
     FROM information_schema.key_column_usage as t1 
     JOIN information_schema.REFERENTIAL_CONSTRAINTS as t2 
     WHERE t2.CONSTRAINT_NAME = t1.constraint_name 
       AND t1.referenced_table_name IS NOT NULL 
       AND (DELETE_RULE = "CASCADE" OR UPDATE_RULE = "CASCADE");
+----------------------+---------------------+---------------------+-------------+-------------+
| foreign key          | references          | constraint name     | UPDATE_RULE | DELETE_RULE |
+----------------------+---------------------+---------------------+-------------+-------------+
| dept_emp.emp_no      | employees.emp_no    | dept_emp_ibfk_1     | RESTRICT    | CASCADE     |
| dept_emp.dept_no     | departments.dept_no | dept_emp_ibfk_2     | RESTRICT    | CASCADE     |
| dept_manager.emp_no  | employees.emp_no    | dept_manager_ibfk_1 | RESTRICT    | CASCADE     |
| dept_manager.dept_no | departments.dept_no | dept_manager_ibfk_2 | RESTRICT    | CASCADE     |
| salaries.emp_no      | employees.emp_no    | salaries_ibfk_1     | RESTRICT    | CASCADE     |
| titles.emp_no        | employees.emp_no    | titles_ibfk_1       | RESTRICT    | CASCADE     |
+----------------------+---------------------+---------------------+-------------+-------------+

So in our case above, we have a problem and it’s not recommended to use multi-primary.

Let me show you what kind of error you may get.

Case 1: default settings + group_replication_single_primary_mode = off

In that case, if we perform a DML on such a table, … nothing happens! No error, as there is no conflict on my test machine without a concurrent workload.
But this is not safe, as conflicts might happen; remember, this is not fully supported!

Case 2: group_replication_single_primary_mode = off + group_replication_enforce_update_everywhere_checks = 1

Now if we run a DML on such table, we have an error:

mysql> update employees.salaries set salary = 60118 where emp_no=10002 and salary<60117;
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.

and in the error log we can read:

[ERROR] Plugin group_replication reported: 'Table salaries has a foreign key with 'CASCADE' clause. 
        This is not compatible with Group Replication'

So be careful that by default you could get some issues as the check is disabled.

I also want to add that all the nodes in the group must have the same setting. If you try to start Group Replication on a node where you have a different value for
group_replication_enforce_update_everywhere_checks, the node won’t be able to join and in the error log you will see:

[ERROR] Plugin group_replication reported: 'The member configuration is not compatible with the group configuration. 
        Variables such as single_primary_mode or enforce_update_everywhere_checks must have the same value 
        on every server in the group. 
        (member configuration option: [group_replication_enforce_update_everywhere_checks], group configuration option: []).

Is this enough to be sure that our cluster will run smoothly in Multi-Primary mode? In fact, no, it isn’t!

We also need to reduce the risk of certification failures that might happen when writing on multiple nodes simultaneously.

Workload Check

Group Replication might be sensitive, when writing on multiple nodes (Multi-Primary mode), to the following workloads:

  • Large transactions (they risk conflicting with short ones and having to roll back too frequently)
  • Hotspots: rows that might be changed on all the nodes simultaneously

Large Transactions

With Performance_Schema, we have everything we need in MySQL to be able to identify large transactions. We will then focus on identifying:

  • the transactions with most statements (and most writes in particular)
  • the transactions with most rows affected
  • the largest statements by row affected

Before being able to verify all this on the current system that you want to migrate to Group Replication, we need to activate some consumers and instruments in Performance_Schema:

mysql> update performance_schema.setup_consumers 
 set enabled = 'yes' 
 where name like 'events_statement%' or name like 'events_transaction%';

mysql> update performance_schema.setup_instruments 
 set enabled = 'yes', timed = 'yes' 
 where name = 'transaction';

Now we should let the system run for some time and verify when we have enough data collected.

In the future some of the data we are collecting in this article will be available via sys.

Transactions with most statements

select t.thread_id, t.event_id, count(*) statement_count, 
       sum(s.rows_affected) rows_affected, 
       length(replace(group_concat(
         case when s.event_name = "statement/sql/update" then 1 
              when s.event_name = "statement/sql/insert" then 1 
              when s.event_name = "statement/sql/delete" then 1 
              else null end),',','')) 
         as "# write statements" 
from performance_schema.events_transactions_history_long t 
join performance_schema.events_statements_history_long s 
  on t.thread_id = s.thread_id and t.event_id = s.nesting_event_id 
group by t.thread_id, t.event_id order by rows_affected desc limit 10;

We can also see those statements, as I illustrate below:

mysql> set group_concat_max_len = 1000000;
mysql> select t.thread_id, t.event_id, count(*) statement_count, 
    ->        sum(s.rows_affected) rows_affected, 
    ->        group_concat(sql_text order by s.event_id separator '\n') statements 
    -> from performance_schema.events_transactions_history_long t 
    -> join performance_schema.events_statements_history_long s 
    ->   on t.thread_id = s.thread_id and t.event_id = s.nesting_event_id 
    -> group by t.thread_id, t.event_id order by statement_count desc limit 1\G
*************************** 1. row ***************************
      thread_id: 332
       event_id: 20079
statement_count: 19
  rows_affected: 4
     statements: SELECT c FROM sbtest1 WHERE id=5011
SELECT c FROM sbtest1 WHERE id=4994
SELECT c FROM sbtest1 WHERE id=5049
SELECT c FROM sbtest1 WHERE id=5048
SELECT c FROM sbtest1 WHERE id=4969
SELECT c FROM sbtest1 WHERE id=4207
SELECT c FROM sbtest1 WHERE id=4813
SELECT c FROM sbtest1 WHERE id=4980
SELECT c FROM sbtest1 WHERE id=4965
SELECT c FROM sbtest1 WHERE id=5160
SELECT c FROM sbtest1 WHERE id BETWEEN 4965 AND 4965+99
SELECT SUM(K) FROM sbtest1 WHERE id BETWEEN 3903 AND 3903+99
SELECT c FROM sbtest1 WHERE id BETWEEN 5026 AND 5026+99 ORDER BY c
SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 5015 AND 5015+99 ORDER BY c
UPDATE sbtest1 SET k=k+1 WHERE id=5038
UPDATE sbtest1 SET c='09521266577-73910905313-02504464680-26379112033-24268550394-82474773859-79238765464-79164299430-72120102543-79625697876' WHERE id=4979
DELETE FROM sbtest1 WHERE id=4964
INSERT INTO sbtest1 (id, k, c, pad) VALUES (4964, 5013, '92941108506-80809269412-93466971769-85515755897-68489598719-07756610896-31666993640-93238959707-66480092830-97721213568', '74640142294-85723339839-62552309335-30960818723-80741740383')
COMMIT

Of course there is no rule of thumb saying how many statements make a transaction too large; it is your role as DBA to analyze this and see how often such a transaction could enter in conflict on multiple nodes at the same time.

You can also see the amount of conflicts per host using the following statement:

mysql> select COUNT_CONFLICTS_DETECTED from performance_schema.replication_group_member_stats;
+--------------------------+
| COUNT_CONFLICTS_DETECTED |
+--------------------------+
|                        4 |
+--------------------------+

Transactions with most rows affected

This is of course a more important value to get than the previous one, and here we will directly know how many rows could enter in conflict:

select t.thread_id, t.event_id, count(*) statement_count, 
       sum(s.rows_affected) rows_affected, 
       length(replace(group_concat(
       case 
         when s.event_name = "statement/sql/update" then 1 
         when s.event_name = "statement/sql/insert" then 1 
         when s.event_name = "statement/sql/delete" then 1 
         else null end),',','')) as "# write statements" 
from performance_schema.events_transactions_history_long t 
join performance_schema.events_statements_history_long s 
  on t.thread_id = s.thread_id and t.event_id = s.nesting_event_id 
group by t.thread_id, t.event_id order by rows_affected desc limit 10;

If you find some with a large amount of rows, you can again see what the statements were in that particular transaction. This is an example:

select t.thread_id, t.event_id, count(*) statement_count, 
       sum(s.rows_affected) rows_affected, 
       group_concat(sql_text order by s.event_id separator '\n') statements 
from performance_schema.events_transactions_history_long t 
join performance_schema.events_statements_history_long s 
  on t.thread_id = s.thread_id and t.event_id = s.nesting_event_id 
group by t.thread_id, t.event_id order by rows_affected desc limit 1\G

Just don’t forget to also verify the auto-commit ones, as they are not returned by the query above.

Largest statements by row affected

This query can be used to find the specific statement that modifies the most rows:

SELECT query, db, rows_affected, rows_affected_avg 
FROM sys.statement_analysis 
ORDER BY rows_affected_avg DESC, rows_affected DESC LIMIT 10;

Hotspots

For hotspots, we will look for the queries that update the same PK the most and therefore have to wait longer.

SELECT * 
FROM performance_schema.events_statements_history_long 
WHERE rows_affected > 1 ORDER BY timer_wait DESC LIMIT 20\G

Conclusion

As you can see, the workload is also important when you decide to spread your writes to all nodes or use only a dedicated one. The default is safer and requires less analysis.

Therefore, I recommend using MySQL Group Replication in Multi-Primary mode only for advanced users 😉


MySQL Group Replication: Videocast #1


Hi !

I’m starting a small series of videocasts related to MySQL Group Replication.

As I blogged earlier this year about Single-Primary and Multi-Primary Group Replication clusters, one of the limitations of using a cluster in Multi-Primary mode is the risk linked to concurrent DDLs (such as ALTER statements).

In Group Replication, DDLs do not isolate the full cluster and write operations are not blocked. But this may lead to problems if, for example, you change the same table on two different nodes at the same time. This is the topic of today’s videocast:

MySQL Day – Sessions review #1


On February 3rd, just before Fosdem and the MySQL & Friends Devroom, MySQL’s Community Team is organizing the pre-Fosdem MySQL Day. We increased the number of seats, so there are still some tickets available; don’t hesitate to register, as this will provide you a unique occasion to meet MySQL engineers from Oracle in Europe. We also have very famous speakers from Percona and Booking.com.

Peter Zaitsev himself will deliver a session !

Let’s start this article series on the MySQL Day with my predecessor as MySQL Community Manager, now Product Manager for MySQL Server: Morgan Tocker!

Morgan will open the MySQL Day with a session on MySQL 8.0 Server Defaults:

Starting with MySQL 5.6 there has been a renewed focus in making sure that MySQL has a good out-of-the-box experience. This has resulted in changes such as strict mode being enabled, and sync_binlog on by default.
This presentation follows the evolution of the changes proposed for MySQL 8.0, first blogged about
here:

The session will explain the criteria by which we consider defaults, which settings we have considered so far, and which proposals we have chosen to defer for various reasons. We are looking for feedback on a number of these proposals, so audience questions are very welcome.

Registration is here !

Please use #prefosdem #mysqlday when tweeting 😉

 

Fossasia 2017 – Looking for MySQL Speakers


Fossasia 2017 will take place 17th – 19th March 2017 in Singapore.

Like previous editions, MySQL will sponsor this event. This year the organizers are looking for MySQL speakers. So if you are interested in speaking about your favorite database and sharing your experience during Asia’s Premier Open Technology Event, please submit your talk using the following form: 2017.fossasia.org/speaker-registration

I know the deadline is over but they extended it a bit !

MySQL Day – Sessions review #2


As written yesterday, on February 3rd, just before Fosdem and the MySQL & Friends Devroom, MySQL’s Community Team is organizing the pre-Fosdem MySQL Day.

The second talk of this series is the session of Bernt Marius Johnsen: MySQL 8.0 & Unicode: What, why and how?

Bernt is a Senior QA Engineer in the MySQL Server Team. He works on QA in general, but he is also specialized in character sets and security issues. He is also in charge of the development of test tool automation.

MySQL 8.0 introduces a whole new set of Unicode collations based on Unicode 9.0 for the MySQL utf8mb4 character set. This comes in addition to the existing plethora of character sets and collations
in MySQL 5.7. What are the benefits of the new collations, and why should they be used? (And by the way: What is a collation?)

The talk will give a quick overview of MySQL 5.7 collations and character sets and related Unicode terminology before presenting the new MySQL 8.0 collations.

The session will focus on issues related to upgrades from older character sets and collations covering topics like space and speed considerations, consequences for indexes and key lengths and InnoDB storage formats, etc.

So if you are juggling with character sets all day long, or if you never changed them in MySQL, I think this session is made for you!

Bernt will be on stage at 10.30am, don’t forget to register for this main MySQL 8.0 event

 

MySQL Day – Sessions review #3


On February 3rd, just before Fosdem and the MySQL & Friends Devroom, MySQL’s Community Team is organizing the pre-Fosdem MySQL Day.

Today’s highlighted sessions are those of Øystein Grøvlen:

  • MySQL 8.0: Common Table Expressions (CTEs)
  • Using Optimizer Hints to Improve MySQL Query Performance

Øystein is Senior Principal Software Engineer in the MySQL group at Oracle, where he works on the MySQL Query Optimizer.  

Dr. Grøvlen has a PhD in Computer Science from the Norwegian University of Science and Technology.  Before joining the MySQL team, he was a contributor on the Apache Derby project and Sun’s Architectural Lead on Java DB.  Prior to that, he worked for 10 years on development of Clustra, a highly available DBMS. Øystein lives in Trondheim, Norway.

Øystein is a regular speaker at events like Oracle Open World, Percona Live, Fosdem, …

So, let’s check the content of the two sessions he will deliver during pre-Fosdem MySQL Day.

The first session is at 11.00AM and is about Common Table Expressions (sometimes referred to as WITH queries).

This is a new feature that will be available in MySQL 8.0. In their simplest form, CTEs are a way of creating a view/temporary table for usage in a single query. This can help improve the readability of SQL code. However, they have many more use cases. In particular, when using the RECURSIVE form of CTEs, it is possible to perform advanced tasks with few lines of code. This session covers CTEs as supported in MySQL 8.0, and will present several examples on how you can benefit from using CTEs.
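To give a flavor of the syntax (a small sketch of my own, not from the session abstract), here is a RECURSIVE CTE that simply generates the numbers 1 to 5:

WITH RECURSIVE seq (n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 5
)
SELECT n FROM seq;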

Øystein’s second session is at 3.00PM and is about the MySQL Optimizer.

Sometimes you will experience that the MySQL Optimizer picks a non-optimal execution plan for your query. For example, this may happen when the optimizer assumes a uniform distribution of column values while your actual data is skewed. Or when the optimizer’s cost model is based on assumptions about the performance of hardware components that are inaccurate for your system. Optimizer hints may in such cases be used to influence the optimizer to choose a more optimal plan. This session will cover the different types of hints available in MySQL, and through several practical examples, it will be shown how using hints may improve query performance. The session will also cover the new optimizer hints that have been introduced in MySQL 5.7 and 8.0.
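As a small illustration (my own sketch; t1, t2 and idx_a are placeholders), hints are written as comments right after the SELECT keyword, for example:

SELECT /*+ MAX_EXECUTION_TIME(1000) NO_RANGE_OPTIMIZATION(t1 idx_a) */ t1.a, t2.b
FROM t1 JOIN t2 ON t1.id = t2.t1_id
WHERE t1.a > 10;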

Don’t forget to register for this main MySQL 8.0 event

MySQL Group Replication, the perfect HA database backend for web hosting


Many web hosting providers are looking for an HA solution for the database backend they deliver to their customers.

Galera never became the perfect choice for these environments due to 2 factors:

  1. no DBA really manage the databases
  2. Galera runs database changes in Total Order Isolation

What does that really mean? In fact, when you are a website hosting provider, you host the websites (apache, nginx) on vhosts and you share a database server in which every customer has access to their own schema for their website.

Most of the time, those websites are CMS like Drupal, WordPress or Joomla (and certainly many others sharing the same expectations).

Using these tools allows you to create and manage websites quickly and easily. However, on a shared environment, you can’t expect that all users will use the same version of the CMS at the same time, nor the same plugins. Some may have customized the core or plugins of their favorite solution.

This means that the application itself takes care of database design and operations. So if one of the users decides to upgrade his WordPress (or add/remove a plugin that will create/modify some table schema), on a Galera Cluster he will lock all writes on ALL databases served by the cluster. All writes will be stalled for the total execution time of the DDLs that are part of wp-admin/includes/upgrade.php.

This upgrade of that particular website will then affect all other sites that are on the same system.

MySQL Group Replication doesn’t suffer from the same behavior, which makes it the ideal solution to achieve High Availability for your database on a shared system.

The only problem you could encounter with MySQL Group Replication is if you use a cluster in Multi-Primary mode and perform concurrent DDLs, as I explained in this videocast.

To avoid any problem, when using MySQL Group Replication in Multi-Primary mode, it’s recommended to route all DDLs to the same node (this is not needed if you use the default Single-Primary mode). You could filter such statements using ProxySQL between your web servers and your database servers.
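For example, with ProxySQL you could add a query rule along the lines of this sketch (rule_id, regex and destination hostgroup are purely illustrative and must match your own setup):

ProxySQL Admin> INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
                VALUES (100, 1, '^(ALTER|CREATE|DROP|RENAME|TRUNCATE) ', 1, 1);
ProxySQL Admin> LOAD MYSQL QUERY RULES TO RUNTIME;
ProxySQL Admin> SAVE MYSQL QUERY RULES TO DISK;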

 

MySQL Day – Sessions review #4


On February 3rd, just before Fosdem and the MySQL & Friends Devroom, MySQL’s Community Team is organizing the pre-Fosdem MySQL Day.

Today’s highlighted sessions are the ones of Jean-François Gagné, from Booking.com:

  • How Booking.com avoids and deals with replication lag at 12.05
  • Monitoring Booking.com without looking at MySQL at 15.30

Jean-François has been working on growing the MySQL/MariaDB installations at Booking.com since he joined in 2013. His main task is focused on replication bottlenecks (and of course some other engineering problems too). Jean-François works on improving Parallel Replication and deploys Binlog Servers. He also has a good understanding of replication in general and a respectable understanding of InnoDB, Linux and TCP/IP.

In the first talk, Jean-François doesn’t discuss making replication faster; he explains how to deal with the asynchronous nature of MySQL replication and therefore covers the (in-)famous lag.
During the session, he will start by quickly explaining the consequences of asynchronous replication and how/when lag can happen. Then, Jean-François will present the solution used at Booking.com to both avoid creating lag and minimize the consequences of stale reads on slaves (hint: this solution does not mean reading from the master, because that does not scale).

The second session is quite different. I saw its first live presentation at Percona Live in Amsterdam and I really enjoyed it. I don’t want to spoil it, so you will have to come to see Jean-François on stage sharing some “secrets” on how Booking.com is monitored without looking at MySQL 😉

Don’t forget to register for this main MySQL event and for the MySQL Community Dinner that will happen on Saturday, February 4th, just after the FOSDEM MySQL & Friends Devroom.


MySQL Day – Sessions review #5


On February 3rd, just before Fosdem and the MySQL & Friends Devroom, MySQL’s Community Team is organizing the pre-Fosdem MySQL Day.

Today we will review Dag H. Wanvik‘s session. Dag is spending most of his time implementing Window Functions in MySQL.

Dag H. Wanvik is a senior MySQL developer at Oracle. Before that, he did Derby/Java DB development for Oracle/the Apache Foundation, and he is a Derby committer. Most recently, he co-authored the JSON support for MySQL 5.7. In his previous existence Dag worked on several compilers and HA distributed database systems.

The title of his session is MySQL 8.0: Window functions – finally!

Dag will share his current work on a much-wanted feature in MySQL: analytic SQL window functions. This presentation will give an introduction and quick overview of the feature set and examples of interesting use cases, and will also touch on implementation and optimization issues if time permits.
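To give a taste of what such a query could look like in MySQL 8.0, here is a minimal sketch against a hypothetical employees table (table and column names are made up for illustration, not taken from Dag’s talk):

-- rank each employee within their department and compare with the department average
SELECT name, department, salary,
       RANK()      OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank,
       AVG(salary) OVER (PARTITION BY department)                      AS dept_avg_salary
FROM employees;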

So as you can see, yet another great feature is planned for MySQL 8 !

If you want to attend this very special session and discuss with Dag, don’t forget to register for this main MySQL event. Dag will also attend the MySQL Community Dinner that will happen on Saturday, February 4th just after the FOSDEM’s MySQL & Friends Devroom.

MySQL Day – Sessions review #6


Let’s continue the review of the pre-FOSDEM MySQL Day‘s schedule. Today’s session is that of Sveta Smirnova: What you wanted to know about your MySQL Server instance, but could not find using internal instrumentation only.

Sveta Smirnova has worked as a MySQL Support engineer since 2006; she is also the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL. From 2006 to 2015 she worked in the Bugs Analysis MySQL Support Group at MySQL AB, then Sun, then Oracle. In March 2015 Sveta joined the Support Team at Percona. From 2012 to 2015 she worked on bugs priority. She was also the Support representative in the MySQL Backup Development Team. She works on tricky support issues and MySQL software bugs on a daily basis. From 2012 to 2015 she also worked on the MySQL Labs project “JSON UDFs for MySQL”. She is an active participant in the open source community. Her main interests in recent years are solving DBA problems, finding ways to semi-automate this process, and effective backup techniques.

As Sveta says, MySQL versions 5.6, 5.7 and 8.0 made the life of DBAs and MySQL Support engineers much easier. While in version 5.5 we still had to use operating system tools, debuggers and inject debugging code into our stored routines, with 5.7 this is no longer necessary. 8.0 did not bring us new instruments, but the revolutionary new data dictionary implied huge improvements in the existing instrumentation, so it became safe to use even in large production environments.
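To illustrate that built-in instrumentation (a generic sketch, not taken from Sveta’s talk), the statement digests with the highest total latency can be listed directly from Performance Schema:

-- timer columns are in picoseconds, hence the division to get seconds
SELECT digest_text,
       count_star,
       ROUND(sum_timer_wait/1e12, 2) AS total_latency_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;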

In this session Sveta will not discuss the existing instrumentation, which is already well covered, but she will focus on areas which are still not properly instrumented. She will explain what a MySQL DBA can and cannot do to troubleshoot issues falling into these areas.

I’m curious to see what Sveta will show us; it will certainly provide valuable feedback on where we could improve MySQL instrumentation even more.

This session will start at 15.50.

Don’t forget to register for this main MySQL event and for the MySQL Community Dinner that will happen on Saturday, February 4th just after the FOSDEM’s MySQL & Friends Devroom.

 

MySQL Day – Sessions review #7


Today I will present the unique ProxySQL session of the pre-FOSDEM MySQL Day. Alkin Tezuysal will share with the audience ProxySQL Use Case Scenarios

Alkin is a Senior Technical Manager at Percona and has extensive experience with enterprise relational databases, having worked in various sectors for large corporations. With more than 20 years of industry experience, he has acquired the skills to manage large projects from the ground up to production. For the past six years he’s been focusing on e-commerce, SaaS and MySQL technologies. He managed and architected database topologies for the high-volume site of eBay Intl. He has several years of experience with 24×7 support and operational tasks, as well as with improving database systems for major companies.

ProxySQL aims to be the most powerful proxy in the MySQL ecosystem. It is protocol aware and able to provide high availability (HA) and high performance with no changes in the application, using several built-in features and integration with clustering software.

During this session Alkin will quickly introduce its main features, so that you can better understand how it works. Then, he will describe multiple use case scenarios in which ProxySQL empowers large MySQL installations to provide HA with zero downtime, read/write split, query rewrite, sharding, query caching, and multiplexing using SSL across data centers.
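As an example of one of those scenarios, the classic read/write split can be sketched with two query rules on the ProxySQL admin interface; the rule_ids and the hostgroup numbers (10 for the writer, 20 for the readers) are assumptions to adapt to your setup:

-- keep SELECT ... FOR UPDATE on the writer, send the other SELECTs to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (20, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;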

Don’t miss this talk if High Availability is your concern ! Alkin will be on stage at 16.20 !

There will be 3 other sessions about ProxySQL during FOSDEM’s MySQL & Friends Devroom.

  • The Proxy Wars – MySQL Router, ProxySQL, MariaDB MaxScale (Colin Charles, 14:05-14:25)
  • Painless MySQL HA, Scalability and Flexibility with Ansible, MHA and ProxySQL (Miklos Mukka Szel, 14:35-14:55)
  • Inexpensive Datamasking for MySQL with ProxySQL – data anonymization for developers (Frédéric Descamps, René Cannaò, 15:05-15:25)

Register for this main MySQL event and for the MySQL Community Dinner that will happen on Saturday, February 4th just after the FOSDEM’s MySQL & Friends Devroom.

MySQL Day – Sessions review #8


Let’s finish this pre-FOSDEM MySQL Day Sessions review week with Norvald Ryeng‘s talk on MySQL 8.0 and GIS.

As you know, the pre-FOSDEM MySQL Day will take place on Friday, February 3rd in Brussels. During this day dedicated to MySQL and focusing on 8.0, Norvald will be on stage at 16.50 to check if you are ready for MySQL 8.0’s GIS implementation.

Many great things are happening to GIS in MySQL 8.0. But in order to move forward, we also have to break legacy behavior. What will change? How? Why? And what can I do to avoid problems when I upgrade?

Join Norvald for a tour of changes and recommendations that you can start following today to make your data and applications ready for the future.
If you use GIS functionality in MySQL today, this is a talk you shouldn’t miss! This talk is also for you if you’re interested in the future of GIS in MySQL, or just generally interested in how new functionality, legacy behavior, and upgrade problems are connected.

Norvald has been working as a software engineer on the MySQL Optimizer Team since 2011, primarily on GIS. He is also the point of contact for package maintainers in Linux distributions.

During the pre-FOSDEM MySQL Day, a lot of MySQL Engineers will be present, on stage or in the audience, don’t hesitate to meet them and ask them all the questions you want. They will also be around during the MySQL and Friends Devroom at FOSDEM and at the amazing MySQL Community Dinner.

Register to this pre-FOSDEM MySQL Day !

pre-FOSDEM MySQL Day – change in the schedule


One of the talks will be replaced in the schedule by a panel discussion moderated by Morgan Tocker on MySQL Group Replication & MySQL 8.0.

The panel will be composed of:

  • Kenny Gryp
  • René Cannaò
  • Øystein Grøvlen
  • Mark Leith
  • Frédéric Descamps

These experts will answer questions from Morgan and from the audience.

Don’t miss this great opportunity to ask your questions and participate in this discussion about MySQL.

The schedule:

Friday 3rd February

  • 09:30-10:00 Welcome !
  • 10:00-10:25 MySQL 8.0: Server Defaults – An overview of what settings have changed or are under consideration (Morgan Tocker, Oracle) [MySQL 8.0]
  • 10:30-10:55 MySQL 8.0: Unicode – What, why and how (Bernt Marius Johnsen, Oracle) [MySQL 8.0]
  • 11:00-11:25 MySQL 8.0: Common Table Expressions (CTEs) (Øystein Grøvlen, Oracle) [MySQL 8.0]
  • 11:30-11:55 Group Replication (Kenny Gryp, Percona) [Group Replication]
  • 12:05-12:30 How Booking.com avoids and deals with replication lag (Jean-François Gagné, Booking.com) [Replication]
  • 12:30-13:15 Lunch
  • 13:15-14:10 Panel discussion on MySQL Group Replication & MySQL 8.0 [MySQL 8.0 & GR]
  • 14:15-14:40 MySQL 8.0: Window functions – finally! (Dag H. Wanvik, Oracle) [MySQL 8.0]
  • 14:45-15:00 Coffee Break
  • 15:00-15:25 Using Optimizer Hints to Improve MySQL Query Performance (Øystein Grøvlen, Oracle) [MySQL 8.0]
  • 15:30-15:45 Monitoring Booking.com without looking at MySQL (Jean-François Gagné, Booking.com) [Fun, Sport, Not-MySQL]
  • 15:50-16:15 What you wanted to know about your MySQL Server instance, but could not find using internal instrumentation only (Sveta Smirnova, Percona) [Troubleshooting]
  • 16:20-16:45 ProxySQL Use Case Scenarios (Alkin Tezuysal, Percona) [ProxySQL]
  • 16:50-17:15 MySQL 8.0: GIS – Are you ready ? (Norvald H. Ryeng, Oracle) [MySQL 8.0]

Don’t forget to register on eventbrite.
