22.6.3 Known Issues in NDB Cluster Replication
This section discusses known problems or issues when using replication with NDB Cluster 8.0.
Loss of master-slave connection.
A loss of connection can occur either between the replication master SQL node and the replication slave SQL node, or between the replication master SQL node and the data nodes in the master cluster. In the latter case, this can occur not only as a result of loss of physical connection (for example, a broken network cable), but also due to the overflow of data node event buffers; if the SQL node is too slow to respond, it may be dropped by the cluster (this is controllable to some degree by adjusting the MaxBufferedEpochs and TimeBetweenEpochs configuration parameters). If this occurs, it is entirely possible for new data to be inserted into the master cluster without being recorded in the replication master's binary log. For this reason, to guarantee high availability, it is extremely important to maintain a backup replication channel, to monitor the primary channel, and to fail over to the secondary replication channel when necessary to keep the slave cluster synchronized with the master. NDB Cluster is not designed to perform such monitoring on its own; for this, an external application is required.
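The event-buffer parameters just mentioned are set in the [ndbd default] section of config.ini; an excerpt along the following lines (the values shown are illustrative, not recommendations) gives a slow SQL node more headroom before it is disconnected:

[ndbd default]
# Maximum number of unprocessed epochs a subscribing SQL node may
# lag behind before the cluster disconnects it (default is 100)
MaxBufferedEpochs=200
# Time in milliseconds between epochs (default is 100)
TimeBetweenEpochs=100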
The replication master issues a “gap” event when connecting or reconnecting to the master cluster. (A gap event is a type of “incident event,” which indicates an incident that affects the contents of the database but that cannot easily be represented as a set of changes. Examples of incidents are server crashes, database resynchronization, (some) software updates, and (some) hardware changes.) When the slave encounters a gap in the replication log, it stops with an error message. This message is available in the output of SHOW SLAVE STATUS, and indicates that the SQL thread has stopped due to an incident registered in the replication stream, and that manual intervention is required. See Section 22.6.8, “Implementing Failover with NDB Cluster Replication”, for more information about what to do in such circumstances.
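An external monitoring application might begin with a check along these lines (a sketch only; Slave_SQL_Running and Last_SQL_Error are standard columns in SHOW SLAVE STATUS output):

-- On the slave SQL node: if Slave_SQL_Running is 'No', the values of
-- Last_SQL_Errno and Last_SQL_Error identify the registered incident,
-- and failover to the secondary channel should be initiated.
SHOW SLAVE STATUS\G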
Because NDB Cluster is not designed on its own to monitor replication status or provide failover, if high availability is a requirement for the slave server or cluster, then you must set up multiple replication lines, monitor the master mysqld on the primary replication line, and be prepared to fail over to a secondary line if and as necessary. This must be done manually, or possibly by means of a third-party application. For information about implementing this type of setup, see Section 22.6.7, “Using Two Replication Channels for NDB Cluster Replication”, and Section 22.6.8, “Implementing Failover with NDB Cluster Replication”.
However, if you are replicating from a standalone MySQL server to an NDB Cluster, one channel is usually sufficient.
Circular replication. NDB Cluster Replication supports circular replication, as shown in the next example. The replication setup involves three NDB Clusters numbered 1, 2, and 3, in which Cluster 1 acts as the replication master for Cluster 2, Cluster 2 acts as the master for Cluster 3, and Cluster 3 acts as the master for Cluster 1, thus completing the circle. Each NDB Cluster has two SQL nodes, with SQL nodes A and B belonging to Cluster 1, SQL nodes C and D belonging to Cluster 2, and SQL nodes E and F belonging to Cluster 3.
Circular replication using these clusters is supported as long as the following conditions are met:

- The SQL nodes on all masters and slaves are the same.
- All SQL nodes acting as replication masters and slaves are started using the --log-slave-updates option (see the excerpt following this list).
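For example, each SQL node used in the replication line (SQL nodes A, C, and E in the first scheme described below) would carry the option in its my.cnf; a minimal excerpt, assuming option-file configuration:

[mysqld]
log-slave-updates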
This type of circular replication setup is shown in the following diagram:
In this scenario, SQL node A in Cluster 1 replicates to SQL node C in Cluster 2; SQL node C replicates to SQL node E in Cluster 3; SQL node E replicates to SQL node A. In other words, the replication line (indicated by the curved arrows in the diagram) directly connects all SQL nodes used as replication masters and slaves.
It should also be possible to set up circular replication in which not all master SQL nodes are also slaves, as shown here:
In this case, different SQL nodes in each cluster are used as replication masters and slaves. However, you must not start any of the SQL nodes using --log-slave-updates. This type of circular replication scheme for NDB Cluster, in which the line of replication (again indicated by the curved arrows in the diagram) is discontinuous, should be possible, but it should be noted that it has not yet been thoroughly tested and must therefore still be considered experimental.
The NDB storage engine uses idempotent execution mode, which suppresses duplicate-key and other errors that otherwise break circular replication of NDB Cluster. This is equivalent to setting the global slave_exec_mode system variable to IDEMPOTENT, although this is not necessary in NDB Cluster replication, since NDB Cluster sets this variable automatically and ignores any attempts to set it explicitly.
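This can be confirmed on any SQL node; the following is a quick check, not a required step:

-- On an NDB Cluster SQL node this always reports IDEMPOTENT;
-- attempts to SET it to another value are ignored.
SHOW VARIABLES LIKE 'slave_exec_mode';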
NDB Cluster replication and primary keys.
In the event of a node failure, errors in replication of NDB tables without primary keys can still occur, due to the possibility of duplicate rows being inserted in such cases. For this reason, it is highly recommended that all NDB tables being replicated have primary keys.
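For example (the table and column names here are arbitrary), declaring an explicit primary key avoids relying on the hidden key that NDB otherwise generates:

CREATE TABLE city (
    id INT NOT NULL PRIMARY KEY,  -- explicit primary key recommended for replicated NDB tables
    name VARCHAR(64) NOT NULL
) ENGINE=NDB;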
NDB Cluster Replication and Unique Keys.
In older versions of NDB Cluster, operations that updated values of unique key columns of NDB tables could result in duplicate-key errors when replicated. This issue is solved for replication between NDB tables by deferring unique key checks until after all table row updates have been performed. Deferring constraints in this way is currently supported only by NDB. Thus, updates of unique keys when replicating from NDB to a different storage engine such as MyISAM or InnoDB are still not supported.
The problem encountered when replicating without deferred checking of unique key updates can be illustrated using an NDB table such as t, created and populated on the master (and replicated to a slave that does not support deferred unique key updates) as shown here:
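The definition below is illustrative; apart from the unique column c, the column names and sample rows are assumed:

CREATE TABLE t (
    p INT PRIMARY KEY,
    c INT,
    UNIQUE (c)
) ENGINE=NDB;

INSERT INTO t VALUES (1,1), (2,2), (3,3), (4,4), (5,5);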
The following UPDATE statement on t succeeded on the master, since the rows affected are processed in the order determined by the ORDER BY option, performed over the entire table.
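A statement of this shape reproduces the scenario (the ordering column p is assumed; the essential point is that the ORDER BY covers the whole table):

UPDATE t SET c = c - 1 ORDER BY p;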
However, the same statement failed with a duplicate key error or other constraint violation on the slave, because the ordering of the row updates was done for one partition at a time, rather than for the table as a whole.
Every NDB table is implicitly partitioned by key when it is created. See Section 23.2.5, “KEY Partitioning”, for more information.
GTIDs not supported.
Replication using global transaction IDs is not compatible with the NDB storage engine, and is not supported. Enabling GTIDs is likely to cause NDB Cluster Replication to fail.
Multithreaded slaves not supported.
NDB Cluster does not support multithreaded slaves, and setting related system variables such as slave_parallel_workers, slave_checkpoint_group, and slave_checkpoint_period (or the equivalent mysqld startup options) has no effect.
This is because the slave may not be able to separate transactions occurring in one database from those in another if they are written within the same epoch. In addition, every transaction handled by the NDB storage engine involves at least two databases (the target database and the mysql system database) due to the requirement for updating the mysql.ndb_apply_status table (see Section 22.6.4, “NDB Cluster Replication Schema and Tables”). This in turn breaks the requirement for multithreading that the transaction is specific to a given database.
Restarting with --initial.
Restarting the cluster with the --initial option causes the sequence of GCI and epoch numbers to start over from 0. (This is generally true of NDB Cluster and not limited to replication scenarios involving Cluster.) The MySQL servers involved in replication should in this case be restarted. After this, you should use the RESET MASTER and RESET SLAVE statements to clear the invalid ndb_binlog_index and ndb_apply_status tables, respectively.
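A minimal sketch of this cleanup, assuming one master and one slave SQL node:

-- On the master SQL node, after the cluster has been restarted:
RESET MASTER;  -- clears the invalid ndb_binlog_index table

-- On the slave SQL node:
STOP SLAVE;
RESET SLAVE;   -- clears the invalid ndb_apply_status table
-- (re-point the slave with CHANGE MASTER TO before restarting replication)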
Replication from NDB to other storage engines.
It is possible to replicate an NDB table on the master to a table using a different storage engine on the slave, taking into account the restrictions listed here:
- Multi-master and circular replication are not supported (tables on both the master and the slave must use the NDB storage engine for this to work).
- Using a storage engine which does not perform binary logging for slave tables requires special handling.
- Use of a nontransactional storage engine for slave tables also requires special handling.
- The master mysqld must be started with --ndb-log-update-as-write=0 or --ndb-log-update-as-write=OFF.
The next few paragraphs provide additional information about each of the issues just described.
Multiple masters not supported when replicating NDB to other storage engines.
For replication from NDB to a different storage engine, the relationship between the two databases must be a simple master-slave one. This means that circular or master-master replication is not supported between NDB Cluster and other storage engines. In addition, it is not possible to configure more than one replication channel when replicating between NDB and a different storage engine. (However, an NDB Cluster database can simultaneously replicate to multiple slave NDB Cluster databases.) If the master uses NDB tables, it is still possible to have more than one MySQL Server maintain a binary log of all changes; however, for the slave to change masters (fail over), the new master-slave relationship must be explicitly defined on the slave.
Replicating NDB to a slave storage engine that does not perform binary logging. If you attempt to replicate from an NDB Cluster to a slave that uses a storage engine that does not handle its own binary logging, the replication process aborts with the error Binary logging not possible ... Statement cannot be written atomically since more than one engine involved and at least one engine is self-logging (Error 1595). It is possible to work around this issue in one of the following ways:
- Turn off binary logging on the slave. This can be accomplished by setting sql_log_bin = 0 (see the sketch following this list).
- Change the storage engine used for the mysql.ndb_apply_status table. Causing this table to use an engine that does not handle its own binary logging can also eliminate the conflict. This can be done by issuing a statement such as ALTER TABLE mysql.ndb_apply_status ENGINE=MyISAM on the slave. It is safe to do this when using a non-NDB storage engine on the slave, since you do not then need to worry about keeping multiple slave SQL nodes synchronized.
- Filter out changes to the mysql.ndb_apply_status table on the slave. This can be done by starting the slave SQL node with --replicate-ignore-table=mysql.ndb_apply_status. If you need other tables to be ignored by replication, you might wish to use an appropriate --replicate-wild-ignore-table option instead.
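A sketch of these workarounds as statements issued on the slave (choose one; the session shown is illustrative):

-- Workaround 1: turn off binary logging. sql_log_bin is session-scoped
-- in MySQL 8.0; to disable binary logging for the whole server, start
-- mysqld with --skip-log-bin instead.
SET sql_log_bin = 0;

-- Workaround 2: move ndb_apply_status to an engine that does not
-- perform its own binary logging.
ALTER TABLE mysql.ndb_apply_status ENGINE=MyISAM;

-- Workaround 3: the dynamic equivalent of starting the slave with
-- --replicate-ignore-table (requires the slave threads to be stopped).
STOP SLAVE;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE = (mysql.ndb_apply_status);
START SLAVE;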
You should not disable replication or binary logging of mysql.ndb_apply_status, or change the storage engine used for this table, when replicating from one NDB Cluster to another. See Replication and binary log filtering rules with replication between NDB Clusters, for details.
Replication from NDB to a nontransactional storage engine.
When replicating from NDB to a nontransactional storage engine such as MyISAM, you may encounter unnecessary duplicate key errors when replicating INSERT ... ON DUPLICATE KEY UPDATE statements. You can suppress these by using --ndb-log-update-as-write=0, which forces updates to be logged as updates (rather than as writes).
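In option-file form this is simply (a minimal excerpt for the master mysqld):

[mysqld]
ndb-log-update-as-write=0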
Replication and binary log filtering rules with replication between NDB Clusters.
If you are using any of the options --replicate-do-*, --replicate-ignore-*, --binlog-do-db, or --binlog-ignore-db to filter databases or tables being replicated, care must be taken not to block replication or binary logging of the mysql.ndb_apply_status table, which is required for replication between NDB Clusters to operate properly. In particular, you must keep in mind the following:
- Using --replicate-do-db=db_name (and no other --replicate-do-* or --replicate-ignore-* options) means that only tables in database db_name are replicated. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.
- Using --binlog-do-db=db_name (and no other --binlog-do-db options) means that changes only to tables in database db_name are written to the binary log. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.
- Using --replicate-ignore-db=mysql means that no tables in the mysql database are replicated. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
- Using --binlog-ignore-db=mysql means that no changes to tables in the mysql database are written to the binary log. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
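For example, the first rule above can be applied on the slave side dynamically with CHANGE REPLICATION FILTER, the SQL equivalent of the --replicate-* startup options (db1 is a placeholder database name):

-- Replicate only database db1, while keeping mysql.ndb_apply_status flowing.
STOP SLAVE;
CHANGE REPLICATION FILTER
    REPLICATE_DO_DB = (db1),
    REPLICATE_DO_TABLE = (mysql.ndb_apply_status);
START SLAVE;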
You should also remember that each replication rule requires the following:
- Its own --replicate-do-* or --replicate-ignore-* option; multiple rules cannot be expressed in a single replication filtering option. For information about these rules, see Section 17.1.6, “Replication and Binary Logging Options and Variables”.
- Its own --binlog-do-db or --binlog-ignore-db option; multiple rules cannot be expressed in a single binary log filtering option. For information about these rules, see Section 5.4.4, “The Binary Log”.
If you are replicating an NDB Cluster to a slave that uses a storage engine other than NDB, the considerations given previously may not apply, as discussed elsewhere in this section.
NDB Cluster Replication and IPv6. Currently, the NDB API and MGM API do not support IPv6. However, MySQL Servers, including those acting as SQL nodes in an NDB Cluster, can use IPv6 to contact other MySQL Servers. This means that you can replicate between NDB Clusters using IPv6 to connect the master and slave SQL nodes, as shown by the dotted arrow in the following diagram:
However, all connections originating within the NDB Cluster, represented in the preceding diagram by solid arrows, must use IPv4. In other words, all NDB Cluster data nodes, management servers, and management clients must be accessible from one another using IPv4. In addition, SQL nodes must use IPv4 to communicate with the cluster.
Since there is currently no support in the NDB and MGM APIs for IPv6, any applications written using these APIs must also make all connections using IPv4.
Attribute promotion and demotion.
NDB Cluster Replication includes support for attribute promotion and demotion. The implementation of the latter distinguishes between lossy and non-lossy type conversions, and their use on the slave can be controlled by setting the slave_type_conversions global server system variable.
For more information about attribute promotion and demotion in NDB Cluster, see Row-based replication: attribute promotion and demotion.
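For example, to allow attribute promotion and other non-lossy conversions on the slave while rejecting lossy ones (the value shown is one of the documented settings, not a recommendation):

SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';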
NDB, unlike InnoDB or MyISAM, does not write changes to virtual columns to the binary log; however, this has no detrimental effects on NDB Cluster Replication or replication between NDB and other storage engines. Changes to stored generated columns are logged.
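To illustrate the distinction, a sketch (the table, column names, and tax rate are arbitrary, and assume generated-column support in your NDB version):

CREATE TABLE prices (
    id INT NOT NULL PRIMARY KEY,
    amount DECIMAL(10,2) NOT NULL,
    -- changes to this VIRTUAL column are not written to the binary log by NDB
    with_tax_virtual DECIMAL(12,2) AS (amount * 1.21) VIRTUAL,
    -- changes to this STORED column are logged
    with_tax_stored DECIMAL(12,2) AS (amount * 1.21) STORED
) ENGINE=NDB;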