18.2.1.6 Adding Instances to the Group
At this point, the group has one member in it, server s1, which has some data in it. It is now time to expand the group by adding the other two servers configured previously.
18.2.1.6.1 Adding a Second Instance
To add a second instance, server s2, first create a configuration file for it. The configuration is similar to the one used for server s1, except for values such as server_id and group_replication_local_address; the lines that differ are shown in the listing below.
[mysqld]
#
# Disable other storage engines
#
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
#
# Replication configuration parameters
#
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
#
# Group Replication configuration
#
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "s2:33061"
group_replication_group_seeds= "s1:33061,s2:33061,s3:33061"
group_replication_bootstrap_group= off
As with server s1, with the option file in place, launch the server and then configure the distributed recovery credentials. The commands are the same as those used when setting up server s1, because the user is shared within the group. This member needs the same replication user configured in Section 18.2.1.3, “User Credentials”. If you are relying on distributed recovery to configure the user on all members, when s2 connects to the seed s1 the replication user is replicated or cloned to s2. If you did not have binary logging enabled when you configured the user credentials on s1, and a remote cloning operation is not used for state transfer, you must create the replication user on s2 directly. In this case, connect to s2 and issue statements such as the following, where rpl_user and password stand for the account name and password chosen in Section 18.2.1.3:
mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
If you are using the caching SHA-2 authentication plugin, the default in MySQL 8.0, see Using Group Replication and the Caching SHA-2 User Credentials Plugin.
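For example, one option in that case, sketched here as an assumption rather than the only approach, is to let the recovery channel request the donor's public key for password exchange:

mysql> SET GLOBAL group_replication_recovery_get_public_key = ON;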
If necessary, install the Group Replication plugin; see Section 18.2.1.4, “Launching Group Replication”.
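A minimal sketch of that installation step, assuming the plugin library file is named group_replication.so as on Linux (the file name suffix differs on other platforms):

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';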
Start Group Replication, and s2 begins the process of joining the group:
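mysql> START GROUP_REPLICATION;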
Unlike the previous steps, which were the same as those executed on s1, there is one difference here: you do not need to bootstrap the group, because the group already exists. In other words, on s2 group_replication_bootstrap_group is set to off, and you do not issue SET GLOBAL group_replication_bootstrap_group=ON; before starting Group Replication, because the group has already been created and bootstrapped by server s1. At this point server s2 only needs to be added to the already existing group.
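If you want to double-check this before starting, and assuming the Group Replication plugin is already installed on s2, the current value can be inspected as follows (a sketch, not a required step):

mysql> SHOW VARIABLES LIKE 'group_replication_bootstrap_group';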
When Group Replication starts successfully and the server joins the group, it checks the super_read_only variable. By setting super_read_only to ON in the member's configuration file, you can ensure that servers which fail when starting Group Replication for any reason do not accept transactions. If the server should join the group as a read-write instance, for example as the primary in a single-primary group or as a member of a multi-primary group, then when the super_read_only variable is set to ON it is set to OFF upon joining the group.
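For example, this safeguard corresponds to a single line in the member's option file, shown here as an illustrative sketch rather than a line from the listings in this section:

[mysqld]
super_read_only=ON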
Checking the performance_schema.replication_group_members table again shows that there are now two ONLINE servers in the group.
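mysql> SELECT * FROM performance_schema.replication_group_members;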
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE  |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | s1          | 3306        | ONLINE        |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | s2          | 3306        | ONLINE        |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
When s2 attempted to join the group, the distributed recovery process (see Section 18.4.3, “Distributed Recovery”) ensured that s2 applied the same transactions which s1 had applied. Once this process completed, s2 could join the group as a member, and at this point it is marked as ONLINE. In other words, it must have already caught up with server s1 automatically. Once s2 is ONLINE, it begins to process transactions with the group. Verify that s2 has indeed synchronized with server s1 as follows.
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test            |
+-----------------+

mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 | Luis |
+----+------+

mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name      | Pos  | Event_type     | Server_id | End_log_pos | Info                                                               |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 123  | Previous_gtids | 2         | 150         |                                                                    |
| binlog.000001 | 270  | View_change    | 1         | 369         | view_id=14724832985483517:1                                        |
| binlog.000001 | 890  | Table_map      | 1         | 933         | table_id: 108 (test.t1)                                            |
| binlog.000001 | 933  | Write_rows     | 1         | 975         | table_id: 108 flags: STMT_END_F                                    |
| binlog.000001 | 1122 | View_change    | 1         | 1261        | view_id=14724832985483517:2                                        |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
As seen above, the second server has been added to the group and it has replicated the changes from server s1 automatically. In other words, the transactions applied on s1 up to the point in time that s2 joined the group have been replicated to s2.
18.2.1.6.2 Adding Additional Instances
Adding further instances to the group, such as server s3, follows essentially the same sequence of steps as adding the second server, except that the configuration has to be adjusted as it was for server s2. To summarise the required commands:
1. Create the configuration file
[mysqld]
#
# Disable other storage engines
#
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
#
# Replication configuration parameters
#
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
#
# Group Replication configuration
#
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "s3:33061"
group_replication_group_seeds= "s1:33061,s2:33061,s3:33061"
group_replication_bootstrap_group= off
2. Start the server and connect to it.
3. Configure the distributed recovery credentials for the group_replication_recovery channel; as before, rpl_user and password stand for the account chosen in Section 18.2.1.3, “User Credentials”.
mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
4. Install the Group Replication plugin and start it.
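A sketch of this step, under the same Linux library-name assumption as for server s2:

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql> START GROUP_REPLICATION;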
At this point server s3 is booted and running, has joined the group, and has caught up with the other servers in the group. Consulting the performance_schema.replication_group_members table again confirms this is the case.
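mysql> SELECT * FROM performance_schema.replication_group_members;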
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE  |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | s1          | 3306        | ONLINE        |
| group_replication_applier | 7eb217ff-6df3-11e6-966c-00212844f856 | s3          | 3306        | ONLINE        |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | s2          | 3306        | ONLINE        |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
Issuing this same query on server s2 or server s1 yields the same result. Also, you can verify that server s3 has caught up:
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test            |
+-----------------+

mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 | Luis |
+----+------+

mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name      | Pos  | Event_type     | Server_id | End_log_pos | Info                                                               |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 123  | Previous_gtids | 3         | 150         |                                                                    |
| binlog.000001 | 270  | View_change    | 1         | 369         | view_id=14724832985483517:1                                        |
| binlog.000001 | 890  | Table_map      | 1         | 933         | table_id: 108 (test.t1)                                            |
| binlog.000001 | 933  | Write_rows     | 1         | 975         | table_id: 108 flags: STMT_END_F                                    |
| binlog.000001 | 1122 | View_change    | 1         | 1261        | view_id=14724832985483517:2                                        |
| binlog.000001 | 1446 | View_change    | 1         | 1585        | view_id=14724832985483517:3                                        |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+