
MySQL Day – Sessions review #9


Let’s finish these pre-FOSDEM MySQL Day session reviews with Kenny Gryp‘s talk on MySQL Group Replication.

Kenny is working at Percona as MySQL Practice Manager.

Group Replication went Generally Available at the end of 2016. It introduces (virtually) ‘synchronous’ active:active multi-master replication, in addition to asynchronous and semi-synchronous replication, the latter two having been available in MySQL for a long time.

As with any new feature, and especially one introducing active:active multi-master replication, it takes a while before companies adopt the software in production database environments.
For example, even though MySQL 5.7 has been GA for more than a year, adoption has only recently started to increase.

We can, and should, expect the same from Group Replication. As with every release, bugs will be found, and with new features, best practices still need to be formed out of practical experience.

After giving a short introduction on what Group Replication is, Kenny will cover his experience so far in evaluating Group Replication.

Register for this MySQL event and don’t forget FOSDEM’s MySQL & Friends Devroom.

 


FOSDEM 2017 is over… this was again a great MySQL event !


FOSDEM 2017 is over and I brought the flu back home… but hopefully not only that !

Indeed, this 2017 edition was very rewarding. We started our FOSDEM with a “fringe” event: the pre-FOSDEM MySQL Day, where we highlighted MySQL 8.0’s new features and hosted some talks from MySQL friends.

This first edition of the pre-FOSDEM MySQL Day was a great success. We had up to 70 attendees! I would like to thank all the speakers: Morgan Tocker, Bernt Marius Johnsen, Øystein Grøvlen, Kenny Gryp, Jean-François Gagné, Dag H. Wanvik, Sveta Smirnova, Alkin Tezuysal, Norvald H. Ryeng, Mark Leith and René Cannaó.

I also want to thank Dim0, Flyer and Kenny for their precious help organizing the room.

These are the slides of the sessions presented during the MySQL Day:

Then on Saturday, we had the MySQL & Friends Devroom at FOSDEM and, to be honest, I had never seen the room so full for the entire day. Even the last session, at 18.30, was still packed, with people standing in the back of the room and on the stairs at the side !


All sessions were streamed live and recorded. You can find the videos on http://ftp.osuosl.org/pub/fosdem/2017/H.1309/

We ended this amazing day with the famous MySQL & Friends Community Dinner where, once again, the hosts of the day did a great job ! Thank you Melinda, Dim0, Kenny and Flyer.

Thank you also to the sponsors !

 

During the FOSDEM weekend, the MySQL Team was also present at our stand to answer questions and present our new features like MySQL 8.0, Group Replication and InnoDB Cluster.

Our Mark Leith was interviewed by Ken Fallon for Hacker Public Radio; the interview can be listened to at https://video.fosdem.org/2017/stands/H.7_MySQL.flac

During the weekend, our MySQL Group Replication engineers were in high demand from community members, and it seems many people are already evaluating it ! Thank you for all the feedback we got ! ProxySQL was also a very hot topic, good job René !

This FOSDEM 2017 edition was great, and I am already looking forward to the next edition, even if it will be hard to do better ! See you next year !!

MySQL Group Replication… synchronous or asynchronous replication ?


After some feedback we received from early adopters, and discussions during events like FOSDEM, I realized that there is some misconception about the type of replication that MySQL Group Replication is using. And even experts can be confused, as Vadim’s blog post illustrated.

So, is MySQL Group Replication asynchronous or synchronous ??

… in fact it depends !

The short answer is that GR is asynchronous. The confusion here can be explained by the comparison with Galera, which claims to be synchronous or virtually synchronous depending on where and by whom the claim is made (“synchronous multi-master replication library”, “synchronous replication”, “scalable synchronous replication solution”, “enables applications requiring synchronous replication of data”, …). But neither GR nor Galera is more synchronous than the other.

The more detailed answer is that it depends on what you call “replication”. For years in the MySQL world, replication has described the whole process from writing (or changing or deleting) data on a master to the appearance of that data on the slave: writing data on a master, adding that change to the binary log, sending it to the relay log of a slave and the slave applying that change… So “replication” is in fact 5 different steps:

  1. locally applying
  2. generating a binlog event
  3. sending the binlog event to the slave(s)
  4. adding the binlog event on the relay log
  5. applying the binlog event from the relay log

And indeed, in MySQL Group Replication and in Galera (even if, in Galera, the binlog and relay log files are mostly replaced by the galera cache), only step #3 is synchronous… and in fact this step is the streaming of the binlog event (write set) to the slave(s)… the replication of the data to the other nodes.

So yes, the process of sending (replicating, streaming) the data to the other nodes is synchronous. But applying these changes is still completely asynchronous.

For example, if you create a large transaction (which is not recommended with InnoDB, Galera or Group Replication) that modifies a huge amount of records, when the transaction is committed, a huge binlog event is created and streamed everywhere. As soon as the other nodes of the cluster/group acknowledge the reception of the binlog event, the node where the transaction was created returns “success” to the client and the data on that particular node is ready. Meanwhile, all the other nodes need to process the huge binlog event and make all the necessary data modifications… and this can take a lot of time. So yes, if you try to read the data that is part of that huge transaction on a node other than the one where the write was done, the data won’t be there immediately. The bigger the transaction, the longer you will have to wait for your data to be applied on the slave(s).

Let’s look at some pictures to make this clearer, considering that the vertical axis is Time:

We have a MySQL Group Replication cluster of 3 nodes and we start a transaction on node1
we add some statements to our transaction…

we commit the transaction and binary log events are generated
those binlog events are streamed/delivered synchronously to the other nodes, and as soon as everybody (*) acks the reception of the binlog events, each node starts certifying them as soon as it can… but independently
certification can start as soon as the transaction is received
when certification is done, on the writer, there is no need to wait for anything else from the other nodes and the commit result is sent back to the client
every other node consumes the changes from the apply queue and starts to apply them locally. This is again an asynchronous process, as it was for certification
you can see that the transaction is committed on every node at a different time

If you perform a lot of large transactions and you want to avoid inconsistent reads with MySQL Group Replication, you need to wait by yourself: check whether there are still transactions to apply in the queue, or verify the last executed GTID, to know whether the data you modified is already present where you try to read it. By default this is the same with Galera. However, Galera implemented sync_wait (wsrep_sync_wait), which forces the client to wait (up to a timeout) for all the transactions in the apply queue to be applied before the current one.
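If you want to perform such a check yourself, here is a minimal sketch (the GTID set is simply whatever you capture on the writer right after your transaction, and the 10 is an arbitrary timeout in seconds):

-- on the writer, right after the write, capture the executed GTID set:
SELECT @@GLOBAL.gtid_executed;

-- on the member you want to read from, wait until that GTID set has been
-- applied locally; returns 0 on success, 1 on timeout:
SELECT WAIT_FOR_EXECUTED_GTID_SET('<gtid set captured on the writer>', 10);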

The only synchronous replication solution for the moment is still MySQL Cluster, aka NDB.

 

(*) in Group Replication, a majority is enough.

MySQL Group Replication: about ack from majority


The documentation states that “For a transaction to commit, the majority of the group have to agree on the order of a given transaction in the global sequence of transactions.”

This means that as soon as the majority of the nodes that are members of the group ack the writeset reception, certification can start. So, as a picture is worth a thousand words, this is what it looks like if we take the illustrations from my previous post:

a group of 3 members

zoom in on the transaction delivery
the writer also acks
the majority is reached, the system has agreed on the order
the ack of the remaining node will come too, but the order has already been decided

 

certification can start
the process then continues as usual

So theoretically, having 2 nodes in one DC and 1 node in another DC shouldn’t be affected by the latency between both sites if writes are happening on the DC with the majority of nodes. But this is not the case.

As you can see in the video above, every 3rd write to the system is affected by the latency. That’s because the system has to wait for the noop (single skip message) from the “distant” node. As Alfranio explained in his blog post about our homegrown Paxos-based consensus, XCom is a multi-leader, or more precisely a multi-proposer, solution. In this protocol, every member has an associated unique number and a reserved slot in the stream of totally ordered messages. There is no leader election and each member is a leader of its own slots in the stream of messages. Members can propose messages for their slots without having to wait for other members. But if they don’t have anything to say, they need to say so too, and this is where we are affected by the latency.

So in conclusion, having a distant node (or one with higher latency) as a member of a Group slows down the whole workload: not constantly, but at least in proportion to its ratio in the cluster (1/3 for a 3-node cluster, for example).

At least for now, if you need to have a distant node and use it only for reads, I would advise using asynchronous replication between your two sites, or being prepared to pay the cost of the latency.

 

MySQL InnoDB Cluster: MySQL Shell starter guide


Earlier this week, MySQL Shell 1.0.8 was released. This is the first Release Candidate of this major piece of MySQL InnoDB Cluster.

Some commands have been changed and some new ones were added.

For example the following useful commands were added:

  • dba.checkInstanceConfiguration()
  • cluster.checkInstanceState()
  • dba.rebootClusterFromCompleteOutage()

So let’s have a look at how to use the new MySQL Shell to create a MySQL InnoDB Cluster.

Action Plan

We have 3 blank Linux servers: mysql1, mysql2 and mysql3, all running an RPM-based Linux version 7 (Oracle Linux 7, CentOS 7, …).

We will install the required MySQL yum repositories and the needed packages.

We will use MySQL Shell to set up our MySQL InnoDB Cluster.

Packages

To be able to install our cluster, we will first install the repository from the MySQL release package. For more information related to MySQL’s installation or if you are using another OS, please check our online documentation.

On all 3 servers, we do:

# rpm -ivh https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
# yum install -y mysql-community-server

The commands above will install the MySQL Community yum repositories and MySQL Community Server 5.7.17, the latest GA version at the date of this post.
Now we have to install the Shell. As this tool is not yet GA, we need to use another repository that has been installed but not enabled: mysql-tools-preview

# yum install -y mysql-shell --enablerepo=mysql-tools-preview

We are done with the installation. Now let’s initialize MySQL and start it.

Starting MySQL

Before being able to start MySQL, we need to create all the necessary folders and system tables. This process is called MySQL initialization. Let’s proceed without generating a temporary root password, as it is easier and faster for the demonstration. However, I highly recommend using a strong root password.

When the initialization is done, we can start MySQL. So on all the future nodes, you can proceed like this:

# mysqld --initialize-insecure -u mysql --datadir /var/lib/mysql/
# systemctl start mysqld
# systemctl status mysqld

MySQL InnoDB Cluster Instances Configuration

We have now everything we need to start working in the MySQL Shell to configure all the members of our InnoDB Cluster.

First, we will check the configuration of one of our MySQL servers. Some changes are required; we will perform them using the Shell and then restart mysqld:

# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
...
mysql-js> dba.configureLocalInstance()
... here please create a dedicated user and password to admin the cluster (option 2) ...
mysql-js> \q

# systemctl restart mysqld

Now MySQL has all the mandatory settings required to run Group Replication. We can verify the configuration again in the Shell with the dba.checkInstanceConfiguration() function.

We now have to proceed the same way on all the other nodes. Please use the same credentials when you create the user to manage your cluster; I used ‘fred@%’ as an example. As you can’t configure a MySQL Server remotely, you will have to run the Shell locally on every node to be able to run dba.configureLocalInstance() and restart mysqld.

MySQL InnoDB Cluster Creation

Now that all the nodes have been restarted with the correct configuration, we can create the cluster. On one of the instances, we will connect and create the cluster, again using the Shell. I did it on mysql1 and used its IP, as its name also resolves to the loopback interface:

# mysqlsh
mysql-js> var i1='fred@192.168.90.2:3306'
mysql-js> var i2='fred@mysql2:3306'
mysql-js> var i3='fred@mysql3:3306'
mysql-js> shell.connect(i1)
mysql-js> var cluster=dba.createCluster('mycluster')
mysql-js> cluster.status()
...

We can now validate that the dataset on the other instances is correct (no extra transactions executed). This is done by validating the GTIDs. This can be done remotely, so I will still use the MySQL Shell session I’ve opened on mysql1:

mysql-js> cluster.checkInstanceState(i2)
mysql-js> cluster.checkInstanceState(i3)

When the validation passes successfully, it’s time to add the two other nodes to our cluster:

mysql-js> cluster.addInstance(i2)
mysql-js> cluster.addInstance(i3)
mysql-js> cluster.status()

Perfect ! We used MySQL Shell to create this MySQL InnoDB Cluster.
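As an extra, optional check, you can also query the group membership directly in SQL on any of the members; all three instances should be listed with MEMBER_STATE = ONLINE:

SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
  FROM performance_schema.replication_group_members;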

Now let’s see this on video with all the output of the commands:

In the next post, I will show you how to use the Shell to automate the creation of a MySQL InnoDB Cluster using Puppet.

MySQL InnoDB Cluster: Automated Installation with Puppet


We saw yesterday that the new MySQL Shell was out and how we could create a MySQL InnoDB Cluster manually using the Shell.

Today, I would like to show you how easy it is to create recipes to automate the whole process. I have created a Puppet module that can be used as a proof of concept (you might need more features to use it in production, feel free to fork it).

The module can be found on this github repo.

When using Puppet, I really like to put all configuration in hiera.

Environment

We have 3 GNU/Linux servers: mysql1, mysql2 and mysql3.

We won’t install anything related to MySQL manually, everything will be handled by Puppet.

Nodes definition

So, we will define our classes and parameters in hiera.

We need to specify that our classes will be defined in hiera:

manifests/site.pp

hiera_include('classes')
node mysql1 {
}
node mysql2 {
}
node mysql3 {
}

This is the content of our hiera.yaml:

---
:backends:
  - yaml
:yaml:
  :datadir: /vagrant/hieradata
:hierarchy:
  - "%{::hostname}"
  - "%{::operatingsystem}"
  - common

We will have a common yaml file defining parameters common to all cluster nodes, like the credentials and the seed, which points to the node we will use to bootstrap the group (see how to launch Group Replication).

common.yaml

---
 innodbcluster::mysql_root_password: fred
 innodbcluster::mysql_bind_interface: eth1
 innodbcluster::cluster_name: mycluster
 innodbcluster::grant::user: root
 innodbcluster::grant::password: fred
 innodbcluster::seed: mysql1

And finally, every node needs to have the class defined (it could also be defined in common.yaml) and a unique server_id:

mysql1.yaml:

---
classes:
- innodbcluster

innodbcluster::mysql_serverid:  1

mysql2.yaml:

---
classes:
- innodbcluster

innodbcluster::mysql_serverid:  2

mysql3.yaml:

---
classes:
- innodbcluster

innodbcluster::mysql_serverid:  3

And this is it !!

Easy isn’t it ?

The video below illustrates the full deployment of a MySQL InnoDB Cluster of 3 nodes using Puppet:

As usual, your feedback is important, so don’t hesitate to submit bugs and feature requests to https://bugs.mysql.com/ and if you like it, you can say so too 😉

Jeudis du Libre – Mons


Yesterday I was invited to speak at the “Jeudis du Libre” in Mons.

The location was very special as it was in one auditorium of Polytech, the oldest university in the city of Mons.

I presented in French two very hot topics in the MySQL ecosystem:

  • MySQL InnoDB Cluster
  • MySQL as Document Store with JSON datatype & X plugin

Those are very new technologies illustrating MySQL’s innovation. And of course there is much more to come with MySQL 8 !

Here are the slides if you are interested:

I also recommend attending future sessions of the Jeudis du Libre, some might be very interesting. I plan to participate again once MySQL 8.0 is released.

MySQL Group Replication: who is the primary master ??


As you know, MySQL Group Replication runs by default in single primary mode.

mysql2 mysql> show global variables like 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+

But how can we easily find which member of the group is the Primary-Master ?

Of course you could check which one is not in read_only:

mysql2 mysql> select @@read_only;
+-------------+
| @@read_only |
+-------------+
|           1 |
+-------------+

But then you need to perform this on all the nodes one by one until you find the right one.

The primary master is exposed through a status variable: group_replication_primary_member:

mysql2 mysql> show global status like 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | f7aa830d-0f02-11e7-83ba-08002718d305 |
+----------------------------------+--------------------------------------+

But from that value alone, it is not obvious which MySQL server it refers to.

Once again, as we can see who the members of the group are via performance_schema, we can verify this:

mysql2 mysql> select * from performance_schema.replication_group_members\G                                            
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 73f48dcc-0f02-11e7-99b8-08002718d305
 MEMBER_HOST: mysql3
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: f7aa830d-0f02-11e7-83ba-08002718d305
 MEMBER_HOST: mysql1
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: fbe0b0a1-0f02-11e7-a1e5-08002718d305
 MEMBER_HOST: mysql2
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

So we can see here that f7aa830d-0f02-11e7-83ba-08002718d305 is in fact mysql1 !

Can we merge all this in one single query ? Of course !

Here is the query you need to use:

mysql2 mysql> SELECT member_host as "primary master"
              FROM performance_schema.global_status         
              JOIN performance_schema.replication_group_members         
              WHERE variable_name = 'group_replication_primary_member'         
                AND member_id=variable_value;
+----------------+
| primary master |
+----------------+
| mysql1         |
+----------------+

Simple and useful tip 😉


MySQL Group Replication: native support in ProxySQL


ProxySQL is the leading proxy and load balancing solution for MySQL. It has great features like query caching, multiplexing, mirroring, read/write splitting, routing, etc… The latest enhancement in ProxySQL is the native support of MySQL Group Replication. There is no more need to use an external script within the scheduler like I explained in this previous post.

This implementation supports Groups in Single-Primary and in Multi-Primary mode. It is even possible to setup a Multi-Primary Group but dedicate writes on only one member.

René, the main developer of ProxySQL, went even further. For example, in a 7-node cluster (a Group of 7 members) where all nodes are writers (Multi-Primary mode), it’s possible to decide to have only 2 writers, 3 readers and 2 backup-writers. This means that ProxySQL will see all the nodes as possible writers but will only route writes to 2 nodes (adding them to the writer hostgroup, because we decided to limit it to 2 writers, for example), then it will add the others to the backup-writers group, which defines the remaining writer candidates, and finally add 2 to the readers hostgroup.

It’s also possible to limit access to a member that is slower in applying the replicated transactions (when its apply queue reaches a threshold).

It is time to have a look at this new ProxySQL version. The version supporting MySQL Group Replication is 1.4.0 and it is currently only available on GitHub (but stay tuned for a new release soon).

So let’s have a look at what is new for users. When you connect to the admin interface of ProxySQL, you can see a new table: mysql_group_replication_hostgroups

ProxySQL> show tables ;
+--------------------------------------------+
| tables                                     |
+--------------------------------------------+
| global_variables                           |
| mysql_collations                           |
| mysql_group_replication_hostgroups         |
| mysql_query_rules                          |
| mysql_replication_hostgroups               |
| mysql_servers                              |
| mysql_users                                |
...
| scheduler                                  |
+--------------------------------------------+
15 rows in set (0.00 sec)

This is the table we will use to define to which hostgroup a node will belong.

To illustrate how ProxySQL supports MySQL Group Replication, I will use a cluster of 3 nodes:

name     ip
mysql1   192.168.90.2
mysql2   192.168.90.3
mysql3   192.168.90.4

So first, as usual we need to add our 3 members into the mysql_servers table:

mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.2',3306);
Query OK, 1 row affected (0.00 sec)

mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.3',3306);
Query OK, 1 row affected (0.00 sec)

mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.4',3306);
Query OK, 1 row affected (0.00 sec)


mysql> select * from mysql_servers;
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 2            | 192.168.90.2 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.90.3 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.90.4 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

Now we can set up ProxySQL’s behavior with our Group Replication cluster, but first let’s check the definition of the new mysql_group_replication_hostgroups table:

ProxySQL> show create table mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       table: mysql_group_replication_hostgroups
Create Table: CREATE TABLE mysql_group_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup))

There are many new columns, let’s have a look at their meaning:

  • writer_hostgroup: the id of the hostgroup that will contain all the members that are writers
  • backup_writer_hostgroup: if the group is running in multi-primary mode, there are multiple writers (read_only=0); when the number of writers is larger than max_writers, the extra nodes are placed in this backup writer hostgroup
  • reader_hostgroup: the id of the hostgroup that will contain all the members in read_only
  • offline_hostgroup: the id of the hostgroup that will contain the hosts that are not online or not part of the Group
  • active: when enabled, ProxySQL monitors the Group and moves the servers to the appropriate hostgroups accordingly
  • max_writers: limits the number of nodes in the writer hostgroup when the group is in multi-primary mode
  • writer_is_also_reader: boolean value, 0 or 1; when enabled, a node in the writer hostgroup will also belong to the reader hostgroup
  • max_transactions_behind: if the value is greater than 0, it defines how far a node can lag in applying the transactions from the Group, see this post for more info

Now that we are (or should be) more familiar with that table, we will set it up like this:

So let’s add this:

ProxySQL> insert into mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,
reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) 
values (2,4,3,1,1,1,0,100);

We should not forget to save our mysql servers to disk and load them to runtime:

ProxySQL> save mysql servers to disk;
Query OK, 0 rows affected (0.01 sec)

ProxySQL> load mysql servers to runtime;
Query OK, 0 rows affected (0.00 sec)

It’s also important, with the current version of MySQL Group Replication, to add a view and its dependencies to the sys schema: addition_to_sys.sql:

# mysql -p < addition_to_sys.sql

So now, from every member of the group, we can run the following statement. ProxySQL bases its internal monitoring on this very same view:

mysql> select * from gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                  40 |                    0 |
+------------------+-----------+---------------------+----------------------+

We also must not forget to create the monitor user needed by ProxySQL in our cluster:

mysql> GRANT SELECT on sys.* to 'monitor'@'%' identified by 'monitor';

We can immediately check how ProxySQL has distributed the servers in the hostgroups :

ProxySQL>  select hostgroup_id, hostname, status  from runtime_mysql_servers;
+--------------+--------------+--------+
| hostgroup_id | hostname     | status |
+--------------+--------------+--------+
| 2            | 192.168.90.2 | ONLINE |
| 3            | 192.168.90.3 | ONLINE |
| 3            | 192.168.90.4 | ONLINE |
+--------------+--------------+--------+

The Writer (Primary-Master) is mysql1 (192.168.90.2 in hostgroup 2) and the others are in the read hostgroup (id=3).

As you can see, there is no more need to create a scheduler calling an external script with complex rules to move the servers in the right hostgroup.

Now, to use the proxy, it’s exactly as usual: you need to create users associated with a default hostgroup or add routing rules.
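As a minimal sketch (the user name, password and rules below are purely illustrative placeholders, and the same user must of course also exist on the MySQL servers themselves):

ProxySQL> INSERT INTO mysql_users (username, password, default_hostgroup)
          VALUES ('app', 'app_password', 2);
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
          VALUES (1, 1, '^SELECT.*FOR UPDATE', 2, 1);
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
          VALUES (2, 1, '^SELECT', 3, 1);
ProxySQL> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
ProxySQL> LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

With these rules, locking SELECTs stay on the writer hostgroup (2), other SELECTs go to the readers (3) and everything else follows the user’s default hostgroup.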

An extra table has also been added for monitoring:

ProxySQL> SHOW TABLES FROM monitor ;
+------------------------------------+
| tables                             |
+------------------------------------+
| mysql_server_connect               |
| mysql_server_connect_log           |
| mysql_server_group_replication_log |
| mysql_server_ping                  |
| mysql_server_ping_log              |
| mysql_server_read_only_log         |
| mysql_server_replication_lag_log   |
+------------------------------------+
7 rows in set (0.00 sec)

ProxySQL> select * from mysql_server_group_replication_log order by time_start_us desc  limit 5 ;
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| hostname     | port | time_start_us    | success_time_us | viable_candidate | read_only | transactions_behind | error |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| 192.168.90.4 | 3306 | 1490187314429511 | 1887            | YES              | NO        | 0                   | NULL  |
| 192.168.90.3 | 3306 | 1490187314429141 | 1378            | YES              | YES       | 0                   | NULL  |
| 192.168.90.2 | 3306 | 1490187314428743 | 1478            | NO               | NO        | 0                   | NULL  |
| 192.168.90.4 | 3306 | 1490187309406886 | 3639            | YES              | NO        | 0                   | NULL  |
| 192.168.90.3 | 3306 | 1490187309406486 | 2444            | YES              | YES       | 0                   | NULL  |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+

Enjoy MySQL Group Replication & ProxySQL !

lefred.be is part of the TOP 10 MySQL Blogs

MySQL Challenge


The MySQL team is constantly improving and innovating the product… this time, after having paid particular attention to DBAs and operators with Group Replication & InnoDB Cluster, let’s have a look at what we can also bring to developers.

Let’s start with the following challenge:

Add 5 books from scratch with all their metadata (authors, isbn, edition, year, …) in less than 5 minutes using only MySQL Shell on the command line !

Do you think this is not possible ? Check the video below:

Amazing isn’t it ?

If you want to learn more about this, please attend Mike Zinner & Alfredo Kojima’s session at Percona Live next week:

OpenWorld 2017 Call for Papers Closing Soon!


Time is running out to submit a talk for OpenWorld 2017, which will take place on October 1–5, 2017 in San Francisco, CA.  If you are looking for inspiration on what talk to submit, we encourage you to submit a case study.  We would love to hear how you are using MySQL as part of your data platform, and your experiences in upgrading to InnoDB Cluster with MySQL 5.7.  Talks on how you are testing MySQL 8.0 are also welcome 🙂

The call for papers will be open until May 1st, 2017. Please submit now!

MySQL Group Replication and logical backup


Taking a logical backup of a member of a Group Replication cluster is not very easy.

Currently (5.7.17, 5.7.18 or 8.0.0), if you want to use mysqldump to take a logical backup of your dataset, you need to lock all the tables on the member you are taking the dump from. Indeed, a single transaction can’t be used, as savepoints are not compatible with Group Replication.

[root@mysql3 ~]# mysqldump -p  --single-transaction --all-databases --triggers \
                      --routines --events >dump.sql
Enter password:
mysqldump: Couldn't execute 'SAVEPOINT sp': The MySQL server is running with the 
--transaction-write-set-extraction!=OFF option so it cannot execute this 
statement (1290)

So we need to use:

[root@mysql3 ~]# mysqldump -p  --lock-all-tables --all-databases --triggers \
                      --routines --events >dump.sql
Enter password:

This can have a negative effect on the whole Group’s performance, as the member having all the tables locked might start sending statistics that reach the flow control threshold. Currently we don’t have any way to ignore those statistics for a given node.
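For reference, flow control is driven by a few group_replication variables that you can inspect on any member (the queue thresholds default to 25000 transactions):

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication_flow_control%';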

The replication development team was of course aware of this problem (reported in bug 81494) and decided to support savepoints with group replication too.

Anibal blogged yesterday about this new improvement.

So with MySQL 8.0.1, we can now take a logical backup using a single transaction:

[root@mysql1 ~]# mysql -p -e "select @@version";
Enter password:
+---------------+
| @@version     |
+---------------+
| 8.0.1-dmr-log |
+---------------+
[root@mysql1 ~]# mysqldump -p  --single-transaction --all-databases --triggers \
                 --routines --events >dump.sql
Enter password:

Wooohoo, it works ! Good job, replication team ! Savepoints can no longer be considered a LIMITATION for MySQL Group Replication !

MySQL Shell: eye candy for a future release !


 

Today I presented MySQL InnoDB Cluster at the Helsinki MySQL User Group.

To demonstrate how easy it is to deploy a cluster with MySQL Shell, I used the prompt that will be part of a future release, just because it’s beautiful.

If you also want to see what it looks like, just check the video below:

There were several MongoDB users in the audience and I got only very positive feedback; they were very surprised how easy it is to deploy a MySQL InnoDB Cluster these days !

Migration from MySQL Master-Slave pair to MySQL InnoDB Cluster: howto


MySQL InnoDB Cluster (or Group Replication on its own) is becoming more and more popular. This solution doesn’t attract only experts anymore. On social media, forums and other discussions, people are asking me what is the best way to migrate a running environment using traditional asynchronous replication [Master -> Slave(s)] to InnoDB Cluster.

The following procedure is what I’m currently recommending. The objective of these steps is to reduce the downtime for the database service to a minimum.

We can divide the procedure into 9 steps:

  1. the current situation
  2. preparing the future cluster
  3. data transfer
  4. replication from current system
  5. creation of the cluster with a single instance
  6. adding instances to the cluster
  7. configure the router
  8. test phase
  9. pointing the application to the new solution

1. the current situation

Our application connects to mysql1 which also acts as master for mysql2. mysql3 and mysql4 are spare servers that will be used for the new MySQL InnoDB Cluster.

The final architecture will be a MySQL InnoDB Cluster group of 3 machines: mysql2, mysql3 and mysql4.

2. preparing the cluster

The current Master-Slave setup must use GTIDs (so at least MySQL 5.6).
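A quick way to verify this on the current master and slave (both variables should be ON):

mysql> SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
mysql> SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';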

MySQL >= 5.7.17 must be used for the InnoDB Cluster group members.

Read the Group Replication’s requirements & limitations:

So, on mysql3 and mysql4, we only need to install MySQL >=5.7.17.

There are also two different approaches to create such a cluster:

  1. create it manually and then use MySQL Shell to create the metadata needed by MySQL-router
  2. do everything using MySQL Shell

We will of course use the second option.

3. data transfer

As of now, provisioning a new member is, as with any other type of MySQL replication when you need to provision a new slave, a manual operation. Use a backup !

Group Replication is “just” another type of MySQL replication, therefore we need to use the same concepts. Of course we understand that everybody would benefit from an automatic provisioning process, but we don’t have such a solution yet.

The backup must be consistent and provide the GTID of the last transaction included in the backup.

You can use any option you want: a logical backup with mysqldump, a physical backup with MEB or Xtrabackup, etc.

I will use MEB to illustrate the different operations.

backup:

Let’s take a backup on mysql1:

[mysql1 ~]# mysqlbackup --host=127.0.0.1 --backup-dir=/tmp/backup \
                        --user=root --password=X backup-and-apply-log

Of course we could have taken the backup from mysql2 too.

transfer:

We need to copy the backup from mysql1 to mysql3:

[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp

restore:

Be sure that mysqld is not running on mysql3:

[mysql3 ~]# systemctl stop mysqld

Now it’s time to restore the backup on mysql3; this consists of a simple copy-back and an ownership change:

[mysql3 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql3 ~]# rm /var/lib/mysql/mysql*-bin*  # just some cleanup
[mysql3 ~]# chown -R mysql. /var/lib/mysql

4. replication from current system

At the end of this section we will then have just a normal asynchronous slave.

We need to verify MySQL’s configuration to be sure that my.cnf is configured properly to act as a slave:

[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
log_bin
log_slave_updates

Let’s start mysqld:

[mysql3 ~]# systemctl start mysqld

Now we need to find the GTIDs purged at the time of the backup and set them. Then we will have to set up asynchronous replication and start it. We will then have live data from production replicated to this new slave.

Where to find this information depends on your backup solution.

Using MEB, the latest purged GTIDs are found in the file called backup_gtid_executed.sql :

[mysql3 ~]# cat /tmp/backup/meta/backup_gtid_executed.sql
# On a new slave, issue the following command if GTIDs are enabled:
  SET @@GLOBAL.GTID_PURGED='33351000-3fe8-11e7-80b3-08002718d305:1-1002';

# Use the following command if you want to use the GTID handshake protocol:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;

Let’s connect to mysql on mysql3 and set up replication:

mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
       MASTER_USER="repl_async", MASTER_PASSWORD='Xslave',
       MASTER_AUTO_POSITION=1;
mysql> RESET MASTER;
mysql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";
mysql> START SLAVE;

The credentials used for replication must be present in mysql1.

Using SHOW SLAVE STATUS you should be able to see that the new slave is replicating.
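A quick sanity check on mysql3 would look like this (only the usual fields to watch are shown, and the values are of course just an illustration):

mysql> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...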

5. creation of the cluster with a single instance

It’s finally time to use MySQL Shell ! 😉

You can run THE shell using mysqlsh:

[mysql3 ~]# mysqlsh

Now we can verify if our server is ready to become a member of a new cluster:

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306': 
Validating instance...

The instance 'mysql3:3306' is not valid for Cluster usage.

The following issues were encountered:

 - Some configuration options need to be fixed.

+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+


Please fix these issues, restart the server and try again.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

During this process, the configuration is parsed to see if all the required settings are present.

By default, some settings are missing or need to be changed; we can ask the Shell to perform the changes for us:

mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306': 

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]: y
Validating instance...

The configuration has been updated but it is required to restart the server.

{
    "config_errors": [
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

This command only works to modify the configuration of the local instance (as the name of the function suggests). So when you need to configure multiple members of a cluster, you need to connect to each node independently and run it locally on that node.

As the command output indicates, we now need to restart mysqld to enable the new configuration settings:

[mysql3 ~]# systemctl restart mysqld

We can now connect again with the shell, verify again the configuration and finally create the cluster:

mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password: 
Your MySQL connection id is 6
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306': 
Validating instance...

The instance 'mysql3:3306' is valid for Cluster usage
{
    "status": "ok"
}
mysql-js> cluster = dba.createCluster('MyInnoDBCluster')
A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

Creating InnoDB cluster 'MyInnoDBCluster' on 'root@mysql3:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

<Cluster:MyInnoDBCluster>

We can verify the status of our single node cluster using once again the shell:

mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

We now have a running cluster, which is of course not yet tolerant to any failure.

This is how our architecture looks now:

6. adding instances to the cluster

The goal is to have a Cluster of 3 nodes (or a Group of 3 Members). Now we will add mysql4 using the same backup we used for mysql3 and using the same procedure.

transfer:

[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp

restore:

[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql4 ~]# rm /var/lib/mysql/mysql*-bin*  # just some cleanup
[mysql4 ~]# chown -R mysql. /var/lib/mysql

This time there is no need to modify the configuration manually; we will use the Shell for that later. So we can simply start mysqld:

[mysql4 ~]# systemctl start mysqld

Let’s use the shell to join the Group:

mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password: 
Your MySQL connection id is 27
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...
The instance 'mysql4:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| enforce_gtid_consistency         | OFF           | ON             | Restart the server                               |
| gtid_mode                        | OFF           | ON             | Restart the server                               |
| log_bin                          | 0             | 1              | Restart the server                               |
| log_slave_updates                | 0             | ON             | Restart the server                               |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+

Please fix these issues , restart the server and try again.
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "0",
            "option": "log_bin",
            "required": "1"
        },
        {
            "action": "restart",
            "current": "0",
            "option": "log_slave_updates",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "FILE",
            "option": "master_info_repository",
            "required": "TABLE"
        },
        {
            "action": "restart",
            "current": "FILE",
            "option": "relay_log_info_repository",
            "required": "TABLE"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "transaction_write_set_extraction",
            "required": "XXHASH64"
        }
    ],
    "errors": [],
    "restart_required": true,
    "status": "error"
}

We can let the shell configure it:

mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306': 

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]: y
Validating instance...

The configuration has been updated but it is required to restart the server.

{
    "config_errors": [
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_bin", 
            "required": "1"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_slave_updates", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

Restart the service to enable the changes:

[mysql4 ~]# systemctl restart mysqld

We will again use the same purged GTIDs as previously (remember, they are in /tmp/backup/meta/backup_gtid_executed.sql):

mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";

We used the shell to illustrate how SQL mode can be used with it too.

We are ready to add mysql4 to the Group:

mysql-sql> \js
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...

The instance 'mysql4:3306' is valid for Cluster usage
{
"status": "ok"
}

Now we need to connect to a node that is already a member of the group to load the cluster object (get the metadata of the cluster):

mysql-js> \c root@mysql3:3306
 Creating a Session to 'root@mysql3:3306'
 Enter password:
 Your MySQL connection id is 29
 Server version: 5.7.18-log MySQL Community Server (GPL)
 No default schema selected; type \use <schema> to set one.
 mysql-js> cluster = dba.getCluster()
 <Cluster:MyInnoDBCluster>

Now we can check if the node that we want to add is consistent with the transactions that have been applied (verify the GTIDs):

mysql-js> cluster.checkInstanceState('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Analyzing the instance replication state...

The instance 'mysql4:3306' is valid for the cluster.
The instance is fully recoverable.

{
"reason": "recoverable",
"state": "ok"
}

This is perfect, we can add the new member (mysql4):

mysql-js> cluster.addInstance("root@mysql4:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@mysql4:3306': 
Adding instance to the cluster ...

The instance 'root@mysql4:3306' was successfully added to the cluster.

Let’s verify this:

mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

Great !

Now let’s add mysql2; we don’t need any backup there, as the data is already present.

The first step on mysql2 is to stop the running slave thread(s) (IO and SQL) and then completely forget about this asynchronous replication:

mysql2 mysql> stop slave;
mysql2 mysql> reset slave all;

It’s the right moment to add it to the cluster using MySQL Shell; first we need to check the configuration:

[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql2:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> dba.configureLocalInstance()
[mysql2 ~]# systemctl restart mysqld

and now we can add mysql2 to the cluster:

[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql3:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.addInstance("root@mysql2:3306")
mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

It’s possible to run dba.configureLocalInstance() on a running node to add the group replication settings to my.cnf.
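If you are curious which group replication settings ended up being used, a quick way to look at them at runtime on any member (output omitted here):

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication%';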

7. configure the router

At this point the architecture looks like this:

Let’s configure mysql-router on mysql1. In fact, the router has the capability to bootstrap itself using the cluster’s metadata; it only needs access to one of the members:

[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root: 
WARNING: The MySQL server does not have SSL configured and metadata used by the router may be transmitted unencrypted.

Bootstrapping system MySQL Router instance...
MySQL Router  has now been configured for the InnoDB cluster 'MyInnoDBCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

The configuration is done and the router will listen on 4 ports once started:

[mysql1 ~]# systemctl start mysqlrouter

8. test phase

Now you can check your cluster: see how fast it can process the replication, test some read queries, etc…
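For example, once connected through the router’s read/write port (6446 in the bootstrap output above), a trivial query should show the primary (mysql3 in our setup) with read_only = 0, while the read-only port (6447) should return one of the secondaries with read_only = 1:

mysql> SELECT @@hostname, @@global.read_only;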

As soon as you are happy with your test, you just need to point the application to the router and it’s done ! 😉

9. pointing the application to the new solution

Pointing the application to the router is the only downtime in our story, and it is very short.

This is the final architecture:

Enjoy MySQL InnoDB Cluster !

 

 


How to make MySQL point-in-time recovery faster ?


Before explaining how you can speed up point-in-time recovery, let’s recall what Point-In-Time Recovery is and how it’s usually performed.

Point-in-Time Recovery, PTR

Point-In-Time Recovery is a technique to restore your data up to a certain point in time (usually just before an event that you wish had never happened).

For example, a user made a mistake, like a DROP TABLE or a massive DELETE, and you would like to recover your data up to just before that mistake to revert its effects.

The usual technique consists of restoring the last backup and replaying the binary logs up to that unfortunate “event”.

So, as you might have already realized, backups and binary logs are required 😉

The most widespread technique to replay those binary log events is to use the `mysqlbinlog` command. However, this process can be quick or slow, depending on your workload and on how much data there is to process. Moreover, `mysqlbinlog` parses and dumps binary logs in a single thread, therefore sequentially. Imagine you do a daily backup at midnight and one of your users inconveniently deletes some records at 23.59… you have almost a full day of binary logs to process to be able to perform the Point-in-Time Recovery.
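For reference, the classic single-threaded approach looks more or less like this (paths and the GTID placeholder are purely illustrative):

[root@mysql1 ~]# mysqlbinlog --exclude-gtids='<gtid_of_the_mistake>' \
                             mysql1-bin.000001 mysql1-bin.000002 | mysql -u root -p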

Boost binary log processing

Instead of using the `mysqlbinlog` utility to process the binary log events, in 5.6 and above we can use the MySQL server itself to perform this operation.
In fact, we will use the slave SQL_thread… and as some of you might have realized it already… we could then process those binary logs in parallel using multiple worker threads !

Example

We have a single server running and it’s configured to generate binary logs.

Sysbench is running an OLTP workload on 8 tables using 8 threads, while we will play with another table not touched by sysbench to make the example easier to follow.

mysql> create table myusers (id int auto_increment primary key, name varchar(20));
Query OK, 0 rows affected (0.47 sec)

mysql> insert into myusers values (0,'lefred'),(0,'kennito'),(0,'dim0');
Query OK, 3 rows affected (0.36 sec)
Records: 3 Duplicates: 0 Warnings: 0

mysql> insert into myusers values (0,'flyer'),(0,'luis'),(0,'nunno');
Query OK, 3 rows affected (0.13 sec)
Records: 3 Duplicates: 0 Warnings: 0

mysql> select * from myusers;
+----+---------+
| id | name    |
+----+---------+
|  1 | lefred  |
|  2 | kennito |
|  3 | dim0    |
|  4 | flyer   |
|  5 | luis    |
|  6 | nunno   |
+----+---------+
6 rows in set (0.05 sec)

Time for a backup ! Let’s use MEB:

[root@mysql1 mysql]# /opt/mysql/meb-4.1/bin/mysqlbackup --host=127.0.0.1 \
                  --backup-dir=/tmp/backup --user=root backup-and-apply-log

Backup is done, let’s go back to our table (sysbench is running):

mysql> insert into myusers values (0,'alfranio');
Query OK, 1 row affected (0.33 sec)

mysql> insert into myusers values (0,'vitor');
Query OK, 1 row affected (0.09 sec)

Then, oops…

a delete from myusers… without a WHERE clause !

mysql> delete from myusers;
Query OK, 8 rows affected (0.23 sec)

and we don’t realize it immediately, so we continue…

mysql> insert into myusers values (0,'pedro');
Query OK, 1 row affected (0.19 sec)

mysql> insert into myusers values (0,'thiago');
Query OK, 1 row affected (0.16 sec)

mysql> select * from myusers;
+----+--------+
| id | name   |
+----+--------+
| 9  | pedro  |
| 10 | thiago |
+----+--------+
2 rows in set (0.12 sec)

Ouch ! Let’s find out what the problem was…

mysql> show master status;
+-------------------+-----------+--------------+------------------+----------------------------------------------+
| File              | Position  | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                            |
+-------------------+-----------+--------------+------------------+----------------------------------------------+
| mysql1-bin.000002 | 232930764 |              |                  | 7766037d-4d1e-11e7-8a51-08002718d305:1-46525 |
+-------------------+-----------+--------------+------------------+----------------------------------------------+
1 row in set (0.12 sec)

mysql> pager grep -A 1 -B 2 'sbtest.myusers' | grep -B 4 Delete
PAGER set to 'grep -A 1 -B 2 'sbtest.myusers' | grep -B 4 Delete'
mysql> show binlog events in 'mysql1-bin.000002';
--
| mysql1-bin.000002 | 195697832 | Gtid        | 1 | 195697904 | SET @@SESSION.GTID_NEXT= '7766037d-4d1e-11e7-8a51-08002718d305:25076' |
| mysql1-bin.000002 | 195697904 | Query       | 1 | 195697978 | BEGIN                                                                 |
| mysql1-bin.000002 | 195697978 | Table_map   | 1 | 195698041 | table_id: 203 (sbtest.myusers)                                        |
| mysql1-bin.000002 | 195698041 | Delete_rows | 1 | 195698168 | table_id: 203 flags: STMT_END_F                                       |
528101 rows in set (1.97 sec)

OK, we know which GTID we should avoid (`7766037d-4d1e-11e7-8a51-08002718d305:25076`).

Time to stop MySQL, copy our binary logs somewhere (I recommend also streaming the binary logs to keep a live copy off the server) and restore the backup !

[root@mysql1 ~]# systemctl stop mysqld
[root@mysql1 mysql]# mkdir /tmp/binlogs/
[root@mysql1 mysql]# cp mysql1-bin.* /tmp/binlogs/

We are still in /var/lib/mysql 😉

[root@mysql1 mysql]# rm -rf *
[root@mysql1 mysql]# /opt/mysql/meb-4.1/bin/mysqlbackup --backup-dir=/tmp/backup copy-back
[root@mysql1 mysql]# chown -R mysql. *

It’s time to add some required settings in `my.cnf`:

replicate-same-server-id=1
skip-slave-start

We can now restart MySQL and start the PTR:

[root@mysql1 mysql]# systemctl start mysqld
...
mysql> select * from sbtest.myusers;
+----+---------+
| id | name    |
+----+---------+
|  1 | lefred  |
|  2 | kennito |
|  3 | dim0    |
|  4 | flyer   |
|  5 | luis    |
|  6 | nunno   |
+----+---------+
6 rows in set (0.10 sec)

OK, we are back to the state of the backup, so it’s time to perform the PTR:

mysql> SET @@GLOBAL.GTID_PURGED='7766037d-4d1e-11e7-8a51-08002718d305:25076';
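A quick sanity check (just a sketch; the exact GTID sets will depend on your backup) that the transaction we want to skip is now considered as already applied:

mysql> SELECT @@GLOBAL.GTID_PURGED\G
mysql> SELECT @@GLOBAL.GTID_EXECUTED\G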

It’s time to use our binary logs as relay logs, so the first thing to do is to copy those saved earlier and rename them accordingly:

[root@mysql1 mysql]# for i in $(ls /tmp/binlogs/*.0*)
do
  ext=$(echo $i | cut -d'.' -f2)
  cp $i mysql1-relay-bin.$ext
done

Make sure that all the new files are referenced in `mysql1-relay-bin.index`:

[root@mysql1 mysql]# ls ./mysql1-relay-bin.0* >mysql1-relay-bin.index
[root@mysql1 mysql]# chown mysql. *relay*
mysql> CHANGE MASTER TO RELAY_LOG_FILE='mysql1-relay-bin.000001', 
       RELAY_LOG_POS=1, MASTER_HOST='dummy';
Query OK, 0 rows affected (4.98 sec)

Performance

Now to benefit from replication’s internals, we will use parallel appliers.

If your workload is not distributed across multiple databases, since 5.7 it’s better to use a different slave parallel type than the default value before starting the `SQL_THREAD`:

mysql> SET GLOBAL SLAVE_PARALLEL_TYPE='LOGICAL_CLOCK';
mysql> SET GLOBAL SLAVE_PARALLEL_WORKERS=8;

Now you can start the replication using the new relay logs:

mysql> START SLAVE SQL_THREAD;

It’s possible to monitor the parallel applying using the following query in performance_schema:

mysql> select * from performance_schema.replication_applier_status_by_worker\G
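To follow the overall progress while the workers apply the relay logs, you can also, for example, keep an eye on the executed GTID set and on the classic slave status output:

mysql> SELECT @@GLOBAL.GTID_EXECUTED\G
mysql> SHOW SLAVE STATUS\G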

We can now check that we rebuilt our complete table while simply skipping the bad transaction that was the mistake:

mysql> select * from sbtest.myusers;
+----+----------+
| id | name     |
+----+----------+
|  1 | lefred   |
|  2 | kennito  |
|  3 | dim0     |
|  4 | flyer    |
|  5 | luis     |
|  6 | nunno    |
|  7 | alfranio |
|  8 | vitor    |
|  9 | pedro    |
| 10 | thiago   |
+----+----------+

If for any reason you want to recover only up to the wrong transaction and nothing after, that’s also possible. This is how to proceed after the backup has been restored (the procedure is the same until then).

We need to start mysqld and copy the binary logs as relay logs again. But this time, there is no need to set any value for GTID_PURGED.

We set up replication as above, but this time we start it differently, using the UNTIL keyword:

mysql> CHANGE MASTER TO RELAY_LOG_FILE='mysql1-relay-bin.000001', 
       RELAY_LOG_POS=1, MASTER_HOST='dummy';
mysql> set global slave_parallel_type='LOGICAL_CLOCK';
mysql> SET GLOBAL SLAVE_PARALLEL_WORKERS=8;
mysql> START SLAVE SQL_THREAD UNTIL 
       SQL_BEFORE_GTIDS = '7766037d-4d1e-11e7-8a51-08002718d305:25076';

This time, we will replicate until that GTID and then stop the SQL_THREAD.
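Once the UNTIL condition is reached, the SQL thread stops by itself. A small sketch to confirm it (check that Slave_SQL_Running is 'No' and Until_Condition is 'SQL_BEFORE_GTIDS'):

mysql> SHOW SLAVE STATUS\G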

In both cases, after having performed the PTR, don’t forget to reset all slave information:

mysql> RESET SLAVE ALL;

Conclusion

Of course this is not (yet?) the standard way of doing PTR. Usually, people use mysqlbinlog and replay the binary logs through a MySQL client. But this is a nice hack that in some cases may save a lot of time.

MySQL Document Store: creating generated columns like a boss ;)


Last Thursday, I was introducing MySQL Document Store in Ghent, BE at Percona University.

I was explaining how great this technology is and how MySQL can replace your NoSQL database while still providing you all the benefits of an RDBMS.

This is the full presentation:

Then somebody came up with a nice question. Let me first give some context:

  • we will create a collection to add people in it
  • we will create a virtual column on the age
  • we will index that column
  • we will query and add records to that collection

Collection creation and add some users

mysql-js> schema = session.getSchema('docstore')

mysql-js> collection = schema.createCollection('users')

mysql-js> collection.add({name: "Descamps", firtname: "Frederic", age: "41"}).execute();
mysql-js> collection.add({name: "Cottyn", firtname: "Yvan", age: "42"}).execute();
mysql-js> collection.add({name: "Buytaert", firtname: "Kris", age: "41"}).execute();

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select * from users;
+------------------------------------------------------------------------------------------------------+----------------------------------+
| doc                                                                                                  | _id                              |
+------------------------------------------------------------------------------------------------------+----------------------------------+
| {"_id": "06ab653c0c58e7117611685b359e77d5", "age": "41", "name": "Descamps", "firtname": "Frederic"} | 06ab653c0c58e7117611685b359e77d5 |
| {"_id": "9828dd6e0c58e7117611685b359e77d5", "age": "41", "name": "Buytaert", "firtname": "Kris"}     | 9828dd6e0c58e7117611685b359e77d5 |
| {"_id": "f24730610c58e7117611685b359e77d5", "age": "42", "name": "Cottyn", "firtname": "Yvan"}       | f24730610c58e7117611685b359e77d5 |
+------------------------------------------------------------------------------------------------------+----------------------------------+
3 rows in set (0.00 sec)

Virtual Column Creation

Usually, when we create a virtual generated column, we do it like this:

mysql-sql> alter table users add column age varchar(2) 
           GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.age'))) VIRTUAL;
Query OK, 0 rows affected (0.19 sec)

mysql-sql> select _id, age from users;
+----------------------------------+-----+
| _id                              | age |
+----------------------------------+-----+
| 06ab653c0c58e7117611685b359e77d5 | 41  |
| 9828dd6e0c58e7117611685b359e77d5 | 41  |
| f24730610c58e7117611685b359e77d5 | 42  |
+----------------------------------+-----+

The first question I got was related to the data type. As ages are integers, could we use the column as an integer too ?

The answer is of course yes, but be careful:

mysql-sql> alter table users drop column age;

mysql-sql> alter table users add column age int 
           GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.age'))) VIRTUAL;

mysql-sql> select _id, age from users;
+----------------------------------+-----+
| _id                              | age |
+----------------------------------+-----+
| 06ab653c0c58e7117611685b359e77d5 |  41 |
| 9828dd6e0c58e7117611685b359e77d5 |  41 |
| f24730610c58e7117611685b359e77d5 |  42 |
+----------------------------------+-----+
3 rows in set (0.00 sec)
mysql-sql> show create table users\G
*************************** 1. row ***************************
       Table: users
Create Table: CREATE TABLE `users` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) 
        GENERATED ALWAYS AS 
         (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  `age` int(11) 
        GENERATED ALWAYS AS 
         (json_unquote(json_extract(`doc`,'$.age'))) VIRTUAL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

OK, so now we are using the virtual column as an integer… but as you know, NoSQL doesn’t really care: we could add any value of any type there.

But what will MySQL think about it ?

Let’s verify:

mysql-js> collection.add({name: "Vanoverbeke", firtname: "Dimitri", age: "kid"}).execute();
Incorrect integer value: 'kid' for column 'age' at row 1 (MySQL Error 1366)

As you can see above, the virtual column causes an error and we are not able to add such a value… but what if we really want to ? #freedomeverywhere !

But first, let’s remove that generated column, add the record, and then create the virtual column again:

mysql-sql> alter table users drop column age;
mysql-sql> \js
Switching to JavaScript mode...
mysql-js> collection.add({name: "Vanoverbeke", firtname: "Dimitri", age: "kid"}).execute();
Query OK, 1 item affected (0.05 sec)
mysql-sql> alter table users 
           add column age int 
           GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.age'))) VIRTUAL;

This works like a charm (see the note at the end of the post):

mysql-sql> select _id, age from users;
+----------------------------------+-----+
| _id                              | age |
+----------------------------------+-----+
| 06ab653c0c58e7117611685b359e77d5 |  41 |
| 9828dd6e0c58e7117611685b359e77d5 |  41 |
| ca4f7fba1058e7117611685b359e77d5 |   0 |
| f24730610c58e7117611685b359e77d5 |  42 |
+----------------------------------+-----+
4 rows in set (0.00 sec)

But we can’t add another one anyway:

mysql-js> collection.add({name: "Gryp", firtname: "Kenny", age: "teenager"}).execute();
Incorrect integer value: 'teenager' for column 'age' at row 1 (MySQL Error 1366)

So let’s remove the column and recreate it with an index on it too:

mysql-sql> alter table users drop column age;
mysql-sql> alter table users add column age int 
           GENERATED ALWAYS AS 
           (json_unquote(json_extract(`doc`,'$.age'))) VIRTUAL, add index age_idx(age) ;
ERROR: 1366: Incorrect integer value: 'kid' for column 'age' at row 1

This time it doesn’t work anymore: adding the index forces the generated values to be materialized, and the ‘kid’ value already present in the collection cannot be converted.

CAST( )

OK, we should find another solution. Let’s try with the CAST() function, which returns 0 if it cannot find an integer in the value:

mysql-sql> SELECT CAST("kid" AS UNSIGNED);
+-------------------------+
| CAST("kid" AS UNSIGNED) |
+-------------------------+
| 0                       |
+-------------------------+
1 row in set, 1 warning (0.00 sec)

This seems to be what we are looking for, let’s use it:

mysql-sql> alter table users add column age int 
           GENERATED ALWAYS AS 
           (cast((json_unquote(json_extract(`doc`,'$.age'))) AS SIGNED)) 
           VIRTUAL, add index age_idx(age) ;
ERROR: 1292: Truncated incorrect INTEGER value: 'kid'

Not the same error, but it doesn’t work.

You might then think that instead of having a VIRTUAL column, we should STORE it and index it…

Unfortunately this is not an option either:

mysql-sql> alter table users add column age int 
           GENERATED ALWAYS AS 
           (cast((json_unquote(json_extract(`doc`,'$.age'))) AS SIGNED)) STORED;
ERROR: 1292: Truncated incorrect INTEGER value: 'kid'

And so ?

So the second question was: if we type our fields and we want to index them, what will happen, since in JSON they are not typed ?

If you are not sure that the same type will always be used for a given attribute in the documents, as you can see, it doesn’t work very well. Such a check must be done in the application that uses MySQL Document Store, or you will face problems like those described above.

But of course there is a solution (if there is no solution, there is no problem, is there ?)

Solution

Instead of using the CAST() function, we will create our generated column like a boss and use IF() with the old trick of adding 0 (IF() combined with CAST() would also work):

mysql-sql> alter table users add column age int 
           GENERATED ALWAYS 
           AS (IF(doc->>"$.age"+0=0,NULL,doc->>"$.age")) VIRTUAL, WITH VALIDATION;
Query OK, 4 rows affected (0.81 sec)

I’ve also added WITH VALIDATION. This means that the ALTER TABLE copies the table, and if an out-of-range or any other error occurs, the statement fails. So now you are familiar with this too. The default is WITHOUT VALIDATION, and this is why one of our previous statements worked like a charm.

Let’s have a look at our users:

mysql-sql> select * from users; 
+---------------------------------------------------------------------------------------------------------+----------------------------------+------+
| doc                                                                                                     | _id                              | age  |
+---------------------------------------------------------------------------------------------------------+----------------------------------+------+
| {"_id": "06ab653c0c58e7117611685b359e77d5", "age": "41", "name": "Descamps", "firtname": "Frederic"}    | 06ab653c0c58e7117611685b359e77d5 | 41   |
| {"_id": "9828dd6e0c58e7117611685b359e77d5", "age": "41", "name": "Buytaert", "firtname": "Kris"}        | 9828dd6e0c58e7117611685b359e77d5 | 41   |
| {"_id": "c4f986214e58e711434d685b359e77d5", "age": "kid", "name": "Vanoverbeke", "firtname": "Dimitri"} | c4f986214e58e711434d685b359e77d5 | null |
| {"_id": "f24730610c58e7117611685b359e77d5", "age": "42", "name": "Cottyn", "firtname": "Yvan"}          | f24730610c58e7117611685b359e77d5 | 42   |
+---------------------------------------------------------------------------------------------------------+----------------------------------+------+

Now we can try to add the index:

mysql-sql> alter table users add index age_idx(age);

And we can even add new data where the age is not the expected integer:

mysql-js> collection.add({name: "Gryp", firtname: "Kenny", age: "teenager"}).execute();
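If you want to verify that the optimizer can actually use the new index, a simple EXPLAIN should do the job (just a sketch; the chosen plan depends on your data):

mysql-sql> explain select _id from users where age = 41\G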

As you can see, it requires some extra effort if you want to type JSON attributes in MySQL’s virtual columns, but this allows you to mix both worlds, NoSQL and SQL, very easily on one single platform !

A summer with the MySQL Community Team !

The MySQL Community team will be supporting the following events during the summer and we will be present at some of them ! Please come and visit us !

Northeast PHP

August 9-11, 2017, Charlottetown, PEI Canada
 
We are happy to invite you to Northeast PHP where the MySQL Community team is having a booth. Please find David Stokes, the MySQL Community Manager, at the MySQL booth in the expo area. Dave also submitted a talk on “JSON, Replication, and database programming” which we hope will be accepted. Please watch the conference agenda for further updates.
 
We are looking forward to talking to you there!
More information / registration: http://2017.northeastphp.org/

UbuCon LA 

Lima, Peru, August 18-19, 2017
 
The MySQL Community team is supporting this event as a Platinum sponsor.
More information about the event & registration: http://ubucon.org/en/events/ubucon-latin-america/

Open Source Conference Hokkaido

Hokkaido, Japan, July 14-15, 2017
We are happy to invite you to the next Open Source Conference in Japan, this time in Hokkaido. The local MySQL team, together with MyNA (MySQL Nippon Association), is going to represent MySQL at this event. Do not miss the dedicated MySQL session & the opportunity to talk with our experts at the MySQL booth. This time we have really cute MySQL presents! We are looking forward to talking to you!
More information & registration: https://www.ospn.jp/osc2017-do/

Open Source Conference Kyoto

Kyoto, Japan, August 4-5, 2017
The other Open Source Conference in Japan which the MySQL team is going to attend, as a Gold sponsor, is Open Source Conference Kyoto. As in Hokkaido, here too you can find our MySQL team together with MyNA (MySQL Nippon Association) representatives at the MySQL booth, and listen to the dedicated MySQL session. Do not miss the opportunity to talk to our booth staff, this time with cool MySQL branded presents! We are looking forward to meeting you there!
More information & registration: https://www.ospn.jp/osc2017-kyoto/

COSCUP

August 5-6, 2017, Taipei, Taiwan
As is tradition, also this year you can find the MySQL team at the conference for Open Source Coders, Users & Promoters (COSCUP). We are a Gold sponsor again, and new this year, MySQL got a whole-day Open Source Database Track. As part of this track there are 7 MySQL talks, and 5 of the speakers are from Oracle. Please find some of the topics below:
  • MySQL Server 8.0 by Shinya Sugiyama, the MySQL Master Principal Sales Consultant, Oracle
  • New Features in MySQL 5.7 Optimizer by Amit Bhattacharya, the Senior Software Development Manager, Oracle
  • A good way to use Redis with MySQL by Yuji Otani, the CTO of SKYDISC, Japan
  • MySQL InnoDB Cluster by Frederic Descamps (me!), the MySQL Community Manager, Oracle
  • MySQL InnoDB Cluster and MySQL Connector Workshop by Ivan Ma, MySQL Sales Consultant, Oracle & HK MySQL User Group Leader
  • Sponsored Commercial talk: Database Trend support Next Generation Web Application by Sanjay Manwani, the MySQL Development Director, Oracle
  • … and more… for more details check the COSCUP website…
Please find us at the MySQL booth in the expo area. We are looking forward to talking to you there!
More information & registration: http://coscup.org/2017-landingpage/

FrOSCon

August 19-20, 2017, Sankt Augustin, Germany
This year again we are very happy to invite you to the Free and Open Source Software Conference (FrOSCon) which takes place in Sankt Augustin, Germany. You can find our MySQL representative at the MySQL booth in the expo area as well as a MySQL talk in the program. This year there is going to be a presentation “MySQL 5.7 – InnoDB Cluster [HA built in]” run by Carsten Thalheimer, the Senior MySQL Sales Consultant. Do not miss the opportunity to meet & talk to us there, and check the program for the MySQL talk.
More information & registration: https://www.froscon.de

MySQL Group Replication: understanding Flow Control


When using MySQL Group Replication, it’s possible that some members are lagging behind the group, due to load, hardware limitations, etc. This lag can become problematic for keeping good certification behavior in terms of performance and for keeping possible certification failures as low as possible: the bigger the apply queue, the bigger the risk of conflicts with those not yet applied transactions (this is problematic on Multi-Primary Groups).

Galera users are already familiar with this concept. MySQL Group Replication’s implementation differs in 2 main aspects:

  • the Group is never totally stalled
  • the node having issues doesn’t send flow control messages to the rest of the group asking it to slow down

In fact, every member of the Group sends some statistics about its queues (applier queue and certification queue) to the other members. Then every node decides whether to slow down or not when it realizes that one node has reached the threshold for one of the queues:

group_replication_flow_control_applier_threshold   (default is 25000)
group_replication_flow_control_certifier_threshold (default is 25000)

So when group_replication_flow_control_mode is set to QUOTA on the node seeing that one of the other members of the cluster is lagging behind (threshold reached), it will throttle the write operations down to the minimum quota. This quota is calculated based on the number of transactions applied in the last second, and then it is reduced below that by subtracting the “over the quota” messages from the last period.

This means that, contrary to Galera where the threshold is handled on the node being slow, in MySQL Group Replication the node writing a transaction checks its flow control threshold values and compares them with the statistics received from the other nodes to decide whether to throttle or not.
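To see what this looks like in practice, you can, for example, check the flow control settings and the size of the certification queue reported in performance_schema (a sketch for MySQL 5.7, where replication_group_member_stats shows the local member only):

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication_flow_control%';
mysql> SELECT MEMBER_ID, COUNT_TRANSACTIONS_IN_QUEUE
       FROM performance_schema.replication_group_member_stats;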

You can find more information about Group Replication Flow Control by reading Vitor’s article Zooming-in on Group Replication Performance.

 

BuzzConf is looking for technologists, innovators and entertainers!

MySQL is proud to announce that we are going to support and actively attend a very unique event, BuzzConf, in Australia. It is held on December 1-3, 2017 in Phoenix Park, AU.

BuzzConf is a family-friendly technology festival with a really unique atmosphere – all participants and presenters spend the weekend together in country Victoria, learning and playing with tech during the day and being entertained by live music all night!
Whether you want to give a talk, run a workshop, lead a kids track session, or entertain the masses during the festival, BuzzConf wants to hear from you!

The Call for Presenters runs until August 4, so there’s not long to get an idea in – but plenty of time to nut out all the final details should an idea from your community be selected for the program.

Find out what makes BuzzConf different at https://buzzconf.io or get inspiration from last year’s submissions at https://archive.buzzconf.io/festival-2016

You can find all the details and submit an idea at https://buzzconf.io/call-for-presenters
