21.4 Working with InnoDB Cluster
This section explains how to work with InnoDB cluster, and how to handle common administration tasks.
Before creating a production deployment from server instances you need to check that MySQL on each instance is correctly configured. In addition to dba.configureInstance(), which checks the configuration as part of configuring an instance, you can use the dba.checkInstanceConfiguration() function. This checks that the instance satisfies the requirements listed in Section 21.2.2, “InnoDB Cluster Requirements”, without changing any configuration on the instance. It does not check any data that is on the instance; see Checking Instance State for more information. The following demonstrates issuing this in a running MySQL Shell:
mysql-js> dba.checkInstanceConfiguration('ic@ic-1:3306')
Please provide the password for 'ic@ic-1:3306': ***
Validating MySQL instance at ic-1:3306 for use in an InnoDB cluster...
This instance reports its own address as ic-1
Clients and other cluster members will communicate with it through this address by default.
If this is not correct, the report_host MySQL system variable should be changed.
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
Checking instance configuration...
Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable |
| enforce_gtid_consistency | OFF | ON | Update read-only variable and restart the server |
| gtid_mode | OFF | ON | Update read-only variable and restart the server |
| server_id | 1 | | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
Please use the dba.configureInstance() command to repair these issues.
{
"config_errors": [
{
"action": "server_update",
"current": "CRC32",
"option": "binlog_checksum",
"required": "NONE"
},
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "1",
"option": "server_id",
"required": ""
}
],
"status": "error"
}
Repeat this process for each server instance that you plan to use as part of your cluster. The report generated after running dba.checkInstanceConfiguration() provides information about any configuration changes required before you can proceed. The action field in the config_errors section of the report tells you whether MySQL on the instance requires a restart to detect any change made to the configuration file.
Instances which do not support persisting configuration changes
automatically (see
Persisting Settings)
require you to connect to the server, run MySQL Shell, connect
to the instance locally and issue
dba.configureLocalInstance()
. This enables
MySQL Shell to modify the instance's option file after running
the following commands against a remote instance:
dba.configureInstance()
dba.createCluster()
Cluster.addInstance()
Cluster.removeInstance()
Cluster.rejoinInstance()
Failing to persist configuration changes to an instance's option file can result in the instance not rejoining the cluster after the next restart.
The recommended method is to log in to the remote machine, for
example using SSH, run MySQL Shell as the root user and then
connect to the local MySQL server. For example, use the
--uri
option to connect to the
local instance
:
shell> sudo -i mysqlsh --uri=instance
Alternatively use the \connect command to log in to the local instance. Then issue dba.configureLocalInstance(instance), where instance is the connection information to the local instance, to persist any changes made to the local instance's option file.
mysql-js> dba.configureLocalInstance('ic@ic-2:3306')
Repeat this process for each instance in the cluster which does not support persisting configuration changes automatically. For example if you add 2 instances to a cluster which do not support persisting configuration changes automatically, you must connect to each server and persist the configuration changes required for InnoDB cluster before the instance restarts. Similarly if you modify the cluster structure, for example changing the number of instances, you need to repeat this process for each server instance to update the InnoDB cluster metadata accordingly for each instance in the cluster.
When you create a cluster using dba.createCluster(), the operation returns a Cluster object which can be assigned to a variable. You use this object to work with the cluster, for example to add instances or check the cluster's status. If you want to retrieve a cluster again at a later date, for example after restarting MySQL Shell, use the dba.getCluster([name][, options]) function. For example:
mysql-js> var cluster1 = dba.getCluster()
If you do not specify a cluster name
then the default cluster is returned.
By default MySQL Shell attempts to connect to the primary instance of the cluster when you use dba.getCluster(). Set the connectToPrimary option to configure this behavior. If connectToPrimary is true and the active global MySQL Shell session is not to a primary instance, the cluster is queried for the primary member and the cluster object connects to it. If there is no quorum in the cluster, the operation fails. If connectToPrimary is false, the cluster object uses the active session, in other words the same instance as the MySQL Shell's current global session. If connectToPrimary is not specified, MySQL Shell treats connectToPrimary as true, and falls back to connectToPrimary being false.
To force connecting to a secondary when getting a cluster,
establish a connection to the secondary member of the cluster
and use the connectToPrimary
option by
issuing:
mysql-js> shell.connect(secondary_member)
mysql-js> var cluster1 = dba.getCluster('testCluster', {connectToPrimary:false})
Remember that secondary instances have
super_read_only=ON
, so you
cannot write changes to them.
To get information about the structure of the InnoDB cluster itself, use the Cluster.describe() function:
mysql-js> cluster.describe();
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"topology": [
{
"address": "ic-1:3306",
"label": "ic-1:3306",
"role": "HA"
},
{
"address": "ic-2:3306",
"label": "ic-1:2306",
"role": "HA"
},
{
"address": "ic-3:3306",
"label": "ic-3:3306",
"role": "HA"
}
]
}
}
The output from this function shows the structure of the InnoDB cluster including all of its configuration information, and so on. The address, label and role values match those described at Checking a cluster's Status with Cluster.status().
Cluster objects provide the status() method that enables you to check how a cluster is running. Before you can check the status of the InnoDB cluster, you need to get a reference to the InnoDB cluster object by connecting to any of its instances. However, if you want to make changes to the configuration of the cluster, you must connect to a "R/W" instance. Issuing status() retrieves the status of the cluster based on the view of the cluster that the server instance you are connected to is aware of, and outputs a status report.
The instance's state in the cluster directly influences the
information provided in the status report. Therefore ensure
the instance you are connected to has a status of
ONLINE
.
For information about how the InnoDB cluster is running, use
the cluster's status()
method:
mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
"clusterName": "testcluster",
"defaultReplicaSet": {
"name": "default",
"primary": "ic-1:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"ic-1:3306": {
"address": "ic-1:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"ic-2:3306": {
"address": "ic-2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"ic-3:3306": {
"address": "ic-3:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://ic@ic-1:3306"
}
The output of Cluster.status() provides the following information:
clusterName: name assigned to this cluster during dba.createCluster().

defaultReplicaSet: the server instances which belong to an InnoDB cluster and contain the data set.

primary: displayed when the cluster is operating in single-primary mode only. Shows the address of the current primary instance. If this field is not displayed, the cluster is operating in multi-primary mode.

ssl: whether secure connections are used by the cluster or not. Shows values of REQUIRED or DISABLED, depending on how the memberSslMode option was configured during either createCluster() or addInstance(). The value returned by this parameter corresponds to the value of the group_replication_ssl_mode server variable on the instance. See Securing your Cluster.

status: The status of this element of the cluster. For the overall cluster this describes the high availability provided by this cluster. The status is one of the following:

ONLINE: The instance is online and participating in the cluster.

OFFLINE: The instance has lost connection to the other instances.

RECOVERING: The instance is attempting to synchronize with the cluster by retrieving transactions it needs before it can become an ONLINE member.

UNREACHABLE: The instance has lost communication with the cluster.

ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.

Important
Once an instance enters ERROR state, the super_read_only option is set to ON. To leave the ERROR state you must manually configure the instance with super_read_only=OFF.

(MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.

Note
The MISSING state is specific to InnoDB cluster, it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata, but cannot be found in the live cluster view.

topology: The instances which have been added to the cluster.

Host name of instance: The host name of an instance, for example localhost:3310.

role: what function this instance provides in the cluster. Currently only HA, for high availability.

mode: whether the server is read-write ("R/W") or read-only ("R/O"). From version 8.0.17, this is the current state of the super_read_only variable on the instance. In previous versions the value of mode was derived from whether the instance was serving as a primary or secondary instance. Usually if the instance is a primary, then the mode is "R/W", and if the instance is a secondary the mode is "R/O". From version 8.0.17, the mode attribute also considers whether the cluster has quorum. Any instances in a cluster that have no visible quorum are marked as "R/O".

groupInformationSourceMember: the internal connection used to get information about the cluster, shown as a URI-like connection string. Usually the connection initially used to create the cluster.
To display more information about the cluster use the extended option. From version 8.0.17, the extended option supports integer or Boolean values. To configure the additional information that Cluster.status({'extended':value}) provides, use the following values:
0: disables the additional information, the default
1: includes information about the Group Replication Protocol Version, Group name, cluster member UUIDs, cluster member roles and states as reported by Group Replication, and the list of fenced system variables
2: includes information about transactions processed by connection and applier
3: includes more detailed statistics about the replication performed by each cluster member.
Setting extended
using Boolean values is the
equivalent of setting the integer values 0 and 1. In versions
prior to 8.0.17, the extended
option was only
Boolean. Similarly prior versions used the
queryMembers
Boolean option to provide more
information about the instances in the cluster, which is the
equivalent of setting extended
to 3. The
queryMembers
option is deprecated and
scheduled to be removed in a future release.
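For illustration, assuming a Cluster object has been assigned to the cluster variable as in the earlier examples, the following two calls are equivalent:
mysql-js> cluster.status({extended: true})
mysql-js> cluster.status({extended: 1})
Similarly, the deprecated cluster.status({queryMembers: true}) is the equivalent of cluster.status({extended: 3}).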
When you issue Cluster.status({'extended':1}), or the extended option is set to true, the output includes:
the following additional attributes for the defaultReplicaSet object:

GRProtocolVersion is the Group Replication Protocol Version being used in the cluster.

Tip
InnoDB cluster manages the Group Replication Protocol version being used automatically, see InnoDB cluster and Group Replication Protocol for more information.

groupName is the group's name, a UUID.
the following additional attributes for each object of the topology object:

fenceSysVars: a list containing the name of the fenced system variables which are enabled. Currently the fenced system variables considered are read_only, super_read_only and offline_mode.

memberId: each cluster member UUID.

memberRole: the Member Role as reported by the Group Replication plugin, see the MEMBER_ROLE column of the replication_group_members table.

memberState: the Member State as reported by the Group Replication plugin, see the MEMBER_STATE column of the replication_group_members table.
To see information about recovery and regular transaction I/O; applier worker thread statistics and any lags; applier coordinator statistics, if parallel apply is enabled; and errors and other information from the I/O and applier threads, use the values 2 and 3. A value of 3 is the equivalent of setting the deprecated queryMembers option to true. When you use these values, a connection to each instance in the cluster is opened so that additional instance specific statistics can be queried. The exact statistics that are included in the output depend on the state and configuration of the instance and the server version. This information matches that shown in the replication_group_member_stats table, see the descriptions of the matching columns for more information. Instances which are ONLINE have a transactions section included in the output. Instances which are RECOVERING have a recovery section included in the output. When you set extended to 2, in either case, these sections can contain the following:
appliedCount: see COUNT_TRANSACTIONS_REMOTE_APPLIED
checkedCount: see COUNT_TRANSACTIONS_CHECKED
committedAllMembers: see TRANSACTIONS_COMMITTED_ALL_MEMBERS
conflictsDetectedCount: see COUNT_CONFLICTS_DETECTED
inApplierQueueCount: see COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
inQueueCount: see COUNT_TRANSACTIONS_IN_QUEUE
lastConflictFree: see LAST_CONFLICT_FREE_TRANSACTION
proposedCount: see COUNT_TRANSACTIONS_LOCAL_PROPOSED
rollbackCount: see COUNT_TRANSACTIONS_LOCAL_ROLLBACK
When you set extended to 3, the connection section shows information from the replication_connection_status table. The connection section can contain the following:
The currentlyQueueing section has information about the transactions currently queued:

immediateCommitTimestamp: see QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToNowTime: see QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()
originalCommitTimestamp: see QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToNowTime: see QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()
startTimestamp: see QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP
transaction: see QUEUEING_TRANSACTION

lastHeartbeatTimestamp: see LAST_HEARTBEAT_TIMESTAMP
The lastQueued section has information about the most recently queued transaction:

endTimestamp: see LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP
immediateCommitTimestamp: see LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToEndTime: LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()
originalCommitTimestamp: see LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToEndTime: LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()
queueTime: LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP minus LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP
startTimestamp: see LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP
transaction: see LAST_QUEUED_TRANSACTION

receivedHeartbeats: see COUNT_RECEIVED_HEARTBEATS
receivedTransactionSet: see RECEIVED_TRANSACTION_SET
threadId: see THREAD_ID
Instances which are using a multithreaded slave have a workers section which contains information about the worker threads, and matches the information shown by the replication_applier_status_by_worker table.

The lastApplied section shows the following information about the last transaction applied by the worker:
applyTime: see LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP minus LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP
endTimestamp: see LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP
immediateCommitTimestamp: see LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToEndTime: see LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()
originalCommitTimestamp: see LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToEndTime: see LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()
startTimestamp: see LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP
transaction: see LAST_APPLIED_TRANSACTION
The currentlyApplying section shows the following information about the transaction currently being applied by the worker:

immediateCommitTimestamp: see APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToNowTime: see APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()
originalCommitTimestamp: see APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToNowTime: see APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()
startTimestamp: see APPLYING_TRANSACTION_START_APPLY_TIMESTAMP
transaction: see APPLYING_TRANSACTION
The lastProcessed section has the following information about the last transaction processed by the worker:

bufferTime: LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP
endTimestamp: see LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP
immediateCommitTimestamp: see LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToEndTime: LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP
originalCommitTimestamp: see LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToEndTime: LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP
startTimestamp: see LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP
transaction: see LAST_PROCESSED_TRANSACTION
If parallel applier workers are enabled, then the number of
objects in the workers array in transactions
or recovery
matches the number of configured
workers and an additional coordinator object is included. The
information shown matches the information in the
replication_applier_status_by_coordinator
table. The object can contain:
The currentlyProcessing section has the following information about the transaction being processed by the worker:

immediateCommitTimestamp: see PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
immediateCommitToNowTime: PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP minus NOW()
originalCommitTimestamp: see PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
originalCommitToNowTime: PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP minus NOW()
startTimestamp: see PROCESSING_TRANSACTION_START_BUFFER_TIMESTAMP
transaction: see PROCESSING_TRANSACTION
worker objects have the following information if an error was detected in the replication_applier_status_by_worker table:

lastErrno: see LAST_ERROR_NUMBER
lastError: see LAST_ERROR_MESSAGE
lastErrorTimestamp: see LAST_ERROR_TIMESTAMP

connection objects have the following information if an error was detected in the replication_connection_status table:

lastErrno: see LAST_ERROR_NUMBER
lastError: see LAST_ERROR_MESSAGE
lastErrorTimestamp: see LAST_ERROR_TIMESTAMP

coordinator objects have the following information if an error was detected in the replication_applier_status_by_coordinator table:

lastErrno: see LAST_ERROR_NUMBER
lastError: see LAST_ERROR_MESSAGE
lastErrorTimestamp: see LAST_ERROR_TIMESTAMP
Monitoring Recovery Operations
The output of Cluster.status() shows information about the progress of recovery operations for instances in RECOVERING state. Information is shown for instances recovering using either MySQL Clone, or incremental recovery. Monitor these fields:
The recoveryStatusText field includes information about the type of recovery being used. When MySQL Clone is working the field shows “Cloning in progress”. When incremental recovery is working the field shows “Distributed recovery in progress”.

When MySQL Clone is being used, the recovery field includes a dictionary with the following fields:

cloneStartTime: The timestamp of the start of the clone process
cloneState: The state of the clone progress
currentStage: The current stage which the clone process has reached
currentStageProgress: The current stage progress as a percentage of completion
currentStageState: The current stage state
Example Cluster.status() output, trimmed for brevity:

...
"recovery": {
    "cloneStartTime": "2019-07-15 12:50:22.730",
    "cloneState": "In Progress",
    "currentStage": "FILE COPY",
    "currentStageProgress": 61.726837675213865,
    "currentStageState": "In Progress"
},
"recoveryStatusText": "Cloning in progress",
...
When incremental recovery is being used, the recovery field includes a dictionary with the following field:

state: The state of the group_replication_recovery channel

Example Cluster.status() output, trimmed for brevity:

...
"recovery": {
    "state": "ON"
},
...
From MySQL 8.0.16, Group Replication has the concept of a communication protocol for the group, see Section 18.4.1.4, “Setting a Group's Communication Protocol Version” for background information. The Group Replication communication protocol version usually has to be managed explicitly, and set to accommodate the oldest MySQL Server version that you want the group to support. However, InnoDB cluster automatically and transparently manages the communication protocol versions of its members, whenever the cluster topology is changed using AdminAPI operations. A cluster always uses the most recent communication protocol version that is supported by all the instances that are currently part of the cluster or joining it.
When an instance is added to, removed from, or rejoins the cluster, or a rescan or reboot operation is carried out on the cluster, the communication protocol version is automatically set to a version supported by the instance that is now at the earliest MySQL Server version.
When you carry out a rolling upgrade by removing instances from the cluster, upgrading them, and adding them back into the cluster, the communication protocol version is automatically upgraded when the last remaining instance at the old MySQL Server version is removed from the cluster prior to its upgrade.
To see the communication protocol version being used in a cluster, use the Cluster.status() function with the extended option enabled. The communication protocol version is returned in the GRProtocolVersion field, provided that the cluster has quorum and no cluster members are unreachable.
The following operations can report information about the MySQL Server version running on the instance:

Cluster.status()
Cluster.describe()
Cluster.rescan()
The behavior varies depending on the MySQL Server version of the Cluster object session.

Cluster.status()

If either of the following requirements are met, a version string attribute is returned for each instance JSON object of the topology object:

The Cluster object's current session is version 8.0.11 or later.
The Cluster object's current session is running a version earlier than version 8.0.11 but the extended option is set to 3 (or the deprecated queryMembers is true).
For example on an instance running version 8.0.16:

"topology": {
    "ic-1:3306": {
        "address": "ic-1:3306",
        "mode": "R/W",
        "readReplicas": {},
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.16"
    }

For example on an instance running version 5.7.24:

"topology": {
    "ic-1:3306": {
        "address": "ic-1:3306",
        "mode": "R/W",
        "readReplicas": {},
        "role": "HA",
        "status": "ONLINE",
        "version": "5.7.24"
    }
Cluster.describe()

If the Cluster object's current session is version 8.0.11 or later, a version string attribute is returned for each instance JSON object of the topology object.

For example on an instance running version 8.0.16:

"topology": [
    {
        "address": "ic-1:3306",
        "label": "ic-1:3306",
        "role": "HA",
        "version": "8.0.16"
    }
]
Cluster.rescan()

If the Cluster object's current session is version 8.0.11 or later, and the Cluster.rescan() operation detects instances which do not belong to the cluster, a version string attribute is returned for each instance JSON object of the newlyDiscoveredInstances object.

For example on an instance running version 8.0.16:

"newlyDiscoveredInstances": [
    {
        "host": "ic-4:3306",
        "member_id": "82a67a06-2ba3-11e9-8cfc-3c6aa7197deb",
        "name": null,
        "version": "8.0.16"
    }
]
Whenever Group Replication stops, the
super_read_only
variable is set
to ON
to ensure no writes are made to the
instance. When you try to use such an instance with the
following AdminAPI commands you are given the choice to
set super_read_only=OFF
on the
instance:
dba.configureInstance()
dba.configureLocalInstance()
dba.dropMetadataSchema()
When AdminAPI encounters an instance which has
super_read_only=ON
, in
interactive mode you are given the choice to set
super_read_only=OFF
. For
example:
mysql-js> var myCluster = dba.dropMetadataSchema()
Are you sure you want to remove the Metadata? [y/N]: y
The MySQL instance at 'localhost:3310' currently has the super_read_only system
variable set to protect it from inadvertent updates from applications. You must
first unset it to be able to perform any changes to this instance.
For more information see:
https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.
Do you want to disable super_read_only and continue? [y/N]: y
Metadata Schema successfully removed.
The number of current active sessions to the instance is shown.
You must ensure that no applications can write to the instance
inadvertently. By answering y
you confirm
that AdminAPI can write to the instance. If there is
more than one open session to the instance listed, exercise
caution before permitting AdminAPI to set
super_read_only=OFF
.
To force the function to set super_read_only=OFF in a script, pass the clearReadOnly option set to true. For example dba.configureInstance(instance, {clearReadOnly: true}).
The recommended way to create a user which can administer an InnoDB cluster is to use the clusterAdmin and clusterAdminPassword options with the dba.createCluster() operation. If you want to manually configure a user which can administer an InnoDB cluster, that user requires the following privileges, all with GRANT OPTION:
Global privileges on *.* for RELOAD, SHUTDOWN, PROCESS, FILE, SELECT, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER.

Schema specific privileges for mysql_innodb_cluster_metadata.* are ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE; and for mysql.* are INSERT, UPDATE, DELETE.
If only read operations are needed, for example to create a user
for monitoring purposes, an account with more restricted
privileges can be used. To give the user
your_user
the privileges needed to
monitor InnoDB cluster, issue:
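As a minimal sketch, assuming that read access to the cluster metadata schema and to the relevant Performance Schema tables is sufficient for monitoring, the grants could look like:
mysql-sql> GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
mysql-sql> GRANT SELECT ON performance_schema.* TO your_user@'%';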
For more information, see Section 13.7.1, “Account Management Statements”.
Instances running MySQL 8.0.16 and later support the Group
Replication automatic rejoin functionality, which enables you to
configure instances to automatically rejoin the cluster after
being expelled. See
Section 18.6.6, “Responses to Failure Detection and Network Partitioning” for
background information. AdminAPI provides the
autoRejoinTries
option to configure the
number of tries instances make to rejoin the cluster after being
expelled. By default instances do not automatically rejoin the
cluster. You can configure the
autoRejoinTries
option at either the cluster
level or for an individual instance using the following
commands:
dba.createCluster()
Cluster.addInstance()
Cluster.setOption()
Cluster.setInstanceOption()
The autoRejoinTries
option accepts positive
integer values between 0 and 2016 and the default value is 0,
which means that instances do not try to automatically rejoin.
When you are using the automatic rejoin functionality, your
cluster is more tolerant to faults, especially temporary ones
such as unreliable networks. But if quorum has been lost, you
should not expect members to automatically rejoin the cluster,
because majority is required to rejoin instances.
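As an illustration, and assuming a Cluster object in the cluster variable plus the host names used throughout this section, the option can be set cluster-wide and then overridden for a single instance:
mysql-js> cluster.setOption('autoRejoinTries', 3)
mysql-js> cluster.setInstanceOption('ic@ic-2:3306', 'autoRejoinTries', 5)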
Instances running MySQL version 8.0.12 and later have the
group_replication_exit_state_action
variable, which you can configure using the AdminAPI
exitStateAction
option. This controls what
instances do in the event of leaving the cluster unexpectedly.
By default the exitStateAction
option is
READ_ONLY,
which means that instances which
leave the cluster unexpectedly become read-only. If
exitStateAction
is
ABORT_SERVER
then in the event of leaving the
cluster unexpectedly, the instance shuts down MySQL, and it has
to be started again before it can rejoin the cluster. Note that
when you are using the automatic rejoin functionality, the
action configured by the exitStateAction
option only happens in the event that all attempts to rejoin the
cluster fail.
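For example, a sketch of adding an instance configured to shut down MySQL if it leaves the cluster unexpectedly, reusing an instance address from earlier examples:
mysql-js> cluster.addInstance('ic@ic-3:3306', {exitStateAction: 'ABORT_SERVER'})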
There is a chance you might connect to an instance and try to configure it using the AdminAPI, but at that moment the instance could be rejoining the cluster. This could happen whenever you use any of these operations:
Cluster.status()
dba.getCluster()
Cluster.rejoinInstance()
Cluster.addInstance()
Cluster.removeInstance()
Cluster.rescan()
Cluster.checkInstanceState()
These operations might provide extra information while the instance is automatically rejoining the cluster. In addition, when you are using Cluster.removeInstance(), if the target instance is automatically rejoining the cluster the operation aborts unless you pass in force: true.
Once a sandbox instance is running, it is possible to change its status at any time using the following:
To stop a sandbox instance use dba.stopSandboxInstance(instance). This stops the instance gracefully, unlike dba.killSandboxInstance(instance).

To start a sandbox instance use dba.startSandboxInstance(instance).

To kill a sandbox instance use dba.killSandboxInstance(instance). This stops the instance without gracefully stopping it and is useful in simulating unexpected halts.

To delete a sandbox instance use dba.deleteSandboxInstance(instance). This completely removes the sandbox instance from your file system.
You can remove an instance from a cluster at any time should you wish to do so. This can be done with the Cluster.removeInstance(instance) method, as in the following example:
mysql-js> cluster.removeInstance('root@localhost:3310')
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.
Attempting to leave from the Group Replication group...
The instance 'localhost:3310' was successfully removed from the cluster.
You can optionally pass in the interactive option to control whether you are prompted to confirm the removal of the instance from the cluster. In interactive mode, you are prompted to continue with the removal of the instance (or not) in case it is not reachable. The Cluster.removeInstance() operation ensures that the instance is removed from the metadata of all the cluster members which are ONLINE, and the instance itself.
When the instance being removed has transactions which still
need to be applied, AdminAPI waits for up to the number
of seconds configured by the MySQL Shell
dba.gtidWaitTimeout
option for transactions
(GTIDs) to be applied. The MySQL Shell
dba.gtidWaitTimeout
option has a default
value of 60 seconds, see
Configuring MySQL Shell Options for
information on changing the default. If the timeout value
defined by dba.gtidWaitTimeout
is reached
when waiting for transactions to be applied and the
force
option is false
(or
not defined) then an error is issued and the remove operation is
aborted. If the timeout value defined by
dba.gtidWaitTimeout
is reached when waiting
for transactions to be applied and the force
option is set to true
then the operation
continues without an error and removes the instance from the
cluster.
The force option should only be used with Cluster.removeInstance(instance) when you want to ignore any errors, for example unprocessed transactions or an instance being UNREACHABLE, and do not plan to reuse the instance with the cluster. Ignoring errors when removing an instance from the cluster could result in an instance which is not in synchrony with the cluster, preventing it from rejoining the cluster at a later time. Only use the force option when you plan to no longer use the instance with the cluster; in all other cases you should always try to recover the instance and only remove it when it is available and healthy, in other words with the status ONLINE.
When you create a cluster and add instances to it, values such as the group name, the local address, and the seed instances are configured automatically by AdminAPI. These default values are recommended for most deployments, but advanced users can override the defaults by passing the following options to the dba.createCluster() and Cluster.addInstance() commands.
To customize the name of the replication group created by
InnoDB cluster, pass the groupName
option
to the dba.createCluster()
command. This sets
the
group_replication_group_name
system variable. The name must be a valid UUID.
To customize the address which an instance provides for connections from other instances, pass the localAddress option to the dba.createCluster() and cluster.addInstance() commands. Specify the address in the format host:port. This sets the group_replication_local_address system variable on the instance. The address must be accessible to all instances in the cluster, and must be reserved for internal cluster communication only. In other words do not use this address for communication with the instance.
To customize the instances used as seeds when an instance joins the cluster, pass the groupSeeds option to the dba.createCluster() and cluster.addInstance() commands. Seed instances are contacted when a new instance joins a cluster and used to provide data to the new instance. The addresses are specified as a comma separated list such as host1:port1,host2:port2. This configures the group_replication_group_seeds system variable.
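A sketch combining these options when creating a cluster; the UUID, hosts and ports shown are placeholders rather than values to copy:
mysql-js> var cluster = dba.createCluster('testCluster', {groupName: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', localAddress: 'ic-1:33061', groupSeeds: 'ic-1:33061,ic-2:33061'})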
For more information see the documentation of the system variables configured by these AdminAPI options.
If an instance leaves the cluster, for example because it lost connection, and for some reason it could not automatically rejoin the cluster, it might be necessary to rejoin it to the cluster at a later stage. To rejoin an instance to a cluster issue Cluster.rejoinInstance(instance).
If the instance has
super_read_only=ON
then you
might need to confirm that AdminAPI can set
super_read_only=OFF
. See
Super Read-only and Instances for more
information.
In the case where an instance has not had its configuration persisted (see Persisting Settings), upon restart the instance does not rejoin the cluster automatically. The solution is to issue cluster.rejoinInstance() so that the instance is added to the cluster again and ensure the changes are persisted. Once the InnoDB cluster configuration is persisted to the instance's option file it rejoins the cluster automatically.
If you are rejoining an instance which has changed in some way then you might have to modify the instance to make the rejoin process work correctly. For example, when you restore a MySQL Enterprise Backup backup, the server_uuid changes. Attempting to rejoin such an instance fails because InnoDB cluster instances are identified by the server_uuid variable. In such a situation, information about the instance's old server_uuid must be removed from the InnoDB cluster metadata and then a Cluster.rescan() must be executed to add the instance to the metadata using its new server_uuid. For example:
cluster.removeInstance("root@instanceWithOldUUID:3306", {force: true})
cluster.rescan()
In this case you must pass the force option to the Cluster.removeInstance() method because the instance is unreachable from the cluster's perspective and we want to remove it from the InnoDB cluster metadata anyway.
If an instance (or instances) fails, then a cluster can lose its quorum, which is the ability to vote in a new primary. This can happen when there is a failure of enough instances that there is no longer a majority of the instances which make up the cluster to vote on Group Replication operations. When a cluster loses quorum you can no longer process write transactions with the cluster, or change the cluster's topology, for example by adding, rejoining, or removing instances. However if you have an instance online which contains the InnoDB cluster metadata, it is possible to restore a cluster with quorum. This assumes you can connect to an instance that contains the InnoDB cluster metadata, and that instance can contact the other instances you want to use to restore the cluster.
This operation is potentially dangerous because it can create a split-brain scenario if incorrectly used and should be considered a last resort. Make absolutely sure that there are no partitions of this group that are still operating somewhere in the network, but not accessible from your location.
Connect to an instance which contains the cluster's metadata, then use the Cluster.forceQuorumUsingPartitionOf(instance) operation, which restores the cluster based on the metadata on instance, and then all the instances that are ONLINE from the point of view of the given instance definition are added to the restored cluster.
mysql-js> cluster.forceQuorumUsingPartitionOf("ic@ic-1:3306")
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [ic@ic-1:3306]
Please provide the password for 'ic@ic-1:3306': ******
Restoring the InnoDB cluster ...
The InnoDB cluster was successfully restored using the partition from the instance 'ic@ic-1:3306'.
WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
are removed or joined back to the group that was restored.
In the event that an instance is not automatically added to the cluster, for example if its settings were not persisted, use Cluster.rejoinInstance() to manually add the instance back to the cluster.
The restored cluster might not, and does not have to, consist of all of the original instances which made up the cluster. For example, if the original cluster consisted of the following five instances:
ic-1
ic-2
ic-3
ic-4
ic-5
and the cluster experiences a split-brain scenario, with ic-1, ic-2, and ic-3 forming one partition while ic-4 and ic-5 form another partition. If you connect to ic-1 and issue Cluster.forceQuorumUsingPartitionOf('ic@ic-1:3306') to restore the cluster, the resulting cluster would consist of these three instances:
ic-1
ic-2
ic-3
because ic-1
sees ic-2
and
ic-3
as ONLINE
and does
not see ic-4
and ic-5
.
If your cluster suffers from a complete outage, you can ensure
it is reconfigured correctly using
dba.rebootClusterFromCompleteOutage()
. This
operation takes the instance which MySQL Shell is currently
connected to and uses its metadata to recover the cluster. In
the event that a cluster's instances have completely stopped,
the instances must be started and only then can the cluster be
started. For example if the machine a sandbox cluster was
running on has been restarted, and the instances were at ports
3310, 3320 and 3330, issue:
mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3320)
mysql-js> dba.startSandboxInstance(3330)
This ensures the sandbox instances are running. In the case of a
production deployment you would have to start the instances
outside of MySQL Shell. Once the instances have started, you
need to connect to an instance with the GTID superset, which
means the instance which had applied the most transactions before
the outage. If you are unsure which instance contains the GTID
superset, connect to any instance and follow the interactive
messages from the
dba.rebootClusterFromCompleteOutage()
, which
detects if the instance you are connected to contains the GTID
superset. Reboot the cluster by issuing:
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();
The dba.rebootClusterFromCompleteOutage()
operation then follows these steps to ensure the cluster is
correctly reconfigured:
The InnoDB cluster metadata found on the instance which MySQL Shell is currently connected to is checked to see if it contains the GTID superset, in other words the transactions applied by the cluster. If the currently connected instance does not contain the GTID superset, the operation aborts with that information. See the subsequent paragraphs for more information.
If the instance contains the GTID superset, the cluster is recovered based on the metadata of the instance.
Assuming you are running MySQL Shell in interactive mode, a wizard is run that checks which instances of the cluster are currently reachable and asks if you want to rejoin any discovered instances to the rebooted cluster.
Similarly, in interactive mode the wizard also detects instances which are currently not reachable and asks if you would like to remove such instances from the rebooted cluster.
If you are not using MySQL Shell's interactive mode, you
can use the rejoinInstances
and
removeInstances
options to manually configure
instances which should be joined or removed during the reboot of
the cluster.
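For example, a sketch of a non-interactive reboot which rejoins one reachable instance and removes one that is permanently gone, with the host names reused from this section:
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage('testCluster', {rejoinInstances: ['ic-2:3306'], removeInstances: ['ic-3:3306']})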
If you encounter an error such as The active session
instance isn't the most updated in comparison with the ONLINE
instances of the Cluster's metadata. then the
instance you are connected to does not have the GTID superset of
transactions applied by the cluster. In this situation, connect
MySQL Shell to the instance suggested in the error message and
issue dba.rebootClusterFromCompleteOutage()
from that instance.
To manually detect which instance has the GTID superset rather
than using the interactive wizard, check the
gtid_executed
variable on
each instance. For example issue:
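One way to issue it, for example from MySQL Shell's SQL mode, is:
mysql-sql> SHOW VARIABLES LIKE 'gtid_executed';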
The instance which has applied the largest GTID set of transactions contains the GTID superset.
If this process fails, and the cluster metadata has become badly
corrupted, you might need to drop the metadata and create the
cluster again from scratch. You can drop the cluster metadata
using dba.dropMetadataSchema()
.
The dba.dropMetadataSchema()
method should
only be used as a last resort, when it is not possible to
restore the cluster. It cannot be undone.
If you make configuration changes to a cluster outside of the AdminAPI commands, for example by changing an instance's configuration manually to resolve configuration issues or after the loss of an instance, you need to update the InnoDB cluster metadata so that it matches the current configuration of instances. In these cases, use the Cluster.rescan() operation, which enables you to update the InnoDB cluster metadata either manually or using an interactive wizard. The Cluster.rescan() operation can detect new active instances that are not registered in the metadata and add them, or obsolete instances (no longer active) still registered in the metadata, and remove them. You can automatically update the metadata depending on the instances found by the command, or you can specify a list of instance addresses to either add to the metadata or remove from the metadata. You can also update the topology mode stored in the metadata, for example after changing from single-primary mode to multi-primary mode outside of AdminAPI.
The syntax of the command is Cluster.rescan([options]). The options dictionary supports the following:
interactive: boolean value used to disable or enable the wizards in the command execution. Controls whether prompts and confirmations are provided. The default value is equal to MySQL Shell wizard mode, specified by shell.options.useWizards.

addInstances: list with the connection data of the new active instances to add to the metadata, or “auto” to automatically add missing instances to the metadata. The value “auto” is case-insensitive.

Instances specified in the list are added to the metadata, without prompting for confirmation.
In interactive mode, you are prompted to confirm the addition of newly discovered instances that are not included in the addInstances option.
In non-interactive mode, newly discovered instances that are not included in the addInstances option are reported in the output, but you are not prompted to add them.

removeInstances: list with the connection data of the obsolete instances to remove from the metadata, or “auto” to automatically remove obsolete instances from the metadata.

Instances specified in the list are removed from the metadata, without prompting for confirmation.
In interactive mode, you are prompted to confirm the removal of obsolete instances that are not included in the removeInstances option.
In non-interactive mode, obsolete instances that are not included in the removeInstances option are reported in the output but you are not prompted to remove them.

updateTopologyMode: boolean value used to indicate if the topology mode (single-primary or multi-primary) in the metadata should be updated (true) or not (false) to match the one being used by the cluster. By default, the metadata is not updated (false).

If the value is true then the InnoDB cluster metadata is compared to the current mode being used by Group Replication, and the metadata is updated if necessary. Use this option to update the metadata after making changes to the topology mode of your cluster outside of AdminAPI.
If the value is false then InnoDB cluster metadata about the cluster's topology mode is not updated even if it is different from the topology used by the cluster's Group Replication group.
If the option is not specified and the topology mode in the metadata is different from the topology used by the cluster's Group Replication group, then:
In interactive mode, you are prompted to confirm the update of the topology mode in the metadata.
In non-interactive mode, if there is a difference between the topology used by the cluster's Group Replication group and the InnoDB cluster metadata, it is reported and no changes are made to the metadata.
When the metadata topology mode is updated to match the Group Replication mode, the auto-increment settings on all instances are updated as described at InnoDB cluster and Auto-increment.
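Putting these options together, a sketch of a fully automatic, non-interactive rescan:
mysql-js> cluster.rescan({addInstances: 'auto', removeInstances: 'auto', interactive: false})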
The cluster.checkInstanceState()
function can
be used to verify the existing data on an instance does not
prevent it from joining a cluster. This process works by
validating the instance's global transaction identifier (GTID)
state compared to the GTIDs already processed by the cluster.
For more information on GTIDs see
Section 17.1.3.1, “GTID Format and Storage”. This check enables
you to determine if an instance which has processed transactions
can be added to the cluster.
The following demonstrates issuing this in a running MySQL Shell:
mysql-js> cluster.checkInstanceState('ic@ic-4:3306')
The output of this function can be one of the following:
OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster
OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances
ERROR diverged: the instance has executed GTIDs which diverge with the executed GTIDs of the cluster seed instances
ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances
Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.
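As an illustration, for an instance that has not executed any transactions the returned value has this general shape (representative rather than captured from a live system):
{
    "reason": "new",
    "state": "ok"
}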
To dissolve an InnoDB cluster you connect to a read-write instance, for example the primary in a single-primary cluster, and use the Cluster.dissolve() command. This removes all metadata and configuration associated with the cluster, and disables Group Replication on the instances. Any data that was replicated between the instances is not removed.
There is no way to undo the dissolving of a cluster. To create
it again use dba.createCluster()
.
The Cluster.dissolve() operation can only configure instances which are ONLINE or reachable. If members of a cluster cannot be reached by the member where you issued the Cluster.dissolve() command you have to decide how the dissolve operation should proceed. If there is any chance you want to rejoin any instances that are identified as missing from the cluster, it is strongly recommended to cancel the dissolve operation and first bring the missing instances back online, before proceeding with a dissolve operation. This ensures that all instances can have their metadata updated correctly, and that there is no chance of a split-brain situation. However, if the instances from the cluster which cannot be reached have permanently left the cluster there could be no choice but to force the dissolve operation, which means that the missing instances are ignored and only online instances are affected by the operation.
Forcing the dissolve operation to ignore cluster instances can result in instances which could not be reached during the dissolve operation continuing to operate, creating the risk of a split-brain situation. Only ever force a dissolve operation to ignore missing instances if you are sure there is no chance of the instance coming online again.
In interactive mode, if members of a cluster are not reachable during a dissolve operation then an interactive prompt is displayed, for example:
mysql-js> Cluster.dissolve()
The cluster still has the following registered ReplicaSets:
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"topology": [
{
"address": "ic-1:3306",
"label": "ic-1:3306",
"role": "HA"
},
{
"address": "ic-2:3306",
"label": "ic-2:3306",
"role": "HA"
},
{
"address": "ic-3:3306",
"label": "ic-3:3306",
"role": "HA"
}
]
}
}
WARNING: You are about to dissolve the whole cluster and lose the high
availability features provided by it. This operation cannot be reverted. All
members will be removed from their ReplicaSet and replication will be stopped,
internal recovery user accounts and the cluster metadata will be dropped. User
data will be maintained intact in all instances.
Are you sure you want to dissolve the cluster? [y/N]: y
ERROR: The instance 'ic-2:3306' cannot be removed because it is on a '(MISSING)'
state. Please bring the instance back ONLINE and try to dissolve the cluster
again. If the instance is permanently not reachable, then you can choose to
proceed with the operation and only remove the instance from the Cluster
Metadata.
Do you want to continue anyway (only the instance metadata will be removed)?
[y/N]: y
Instance 'ic-3:3306' is attempting to leave the cluster... Instance 'ic-1:3306'
is attempting to leave the cluster...
WARNING: The cluster was successfully dissolved, but the following instance was
skipped: 'ic-2:3306'. Please make sure this instance is permanently unavailable
or take any necessary manual action to ensure the cluster is fully dissolved.
In this example, the cluster consisted of three instances, one
of which was offline when dissolve was issued. The error is
caught, and you are given the choice how to proceed. In this
case the missing ic-2
instance is ignored and
the reachable members have their metadata updated.
When MySQL Shell is running in non-interactive mode, for example when running a batch file, you can configure the behavior of the Cluster.dissolve() operation using the force option. To force the dissolve operation to ignore any instances which are unreachable, issue:
mysql-js> Cluster.dissolve({force: true})
Any instances which can be reached are removed from the cluster, and any unreachable instances are ignored. The warnings in this section about forcing the removal of missing instances from a cluster apply equally to this technique of forcing the dissolve operation.
You can also use the interactive option with the Cluster.dissolve() operation to override the mode which MySQL Shell is running in, for example to make the interactive prompt appear when running a batch script. For example:
mysql-js> Cluster.dissolve({interactive: true})
The dba.gtidWaitTimeout MySQL Shell option configures how long the Cluster.dissolve() operation waits for cluster transactions to be applied before removing a target instance from the cluster, but only if the target instance is ONLINE. An error is issued if the timeout is reached when waiting for cluster transactions to be applied on any of the instances being removed, except if force: true is used, which skips the error in that case.
After issuing cluster.dissolve(), any variable assigned to the Cluster object is no longer valid.
Server instances can be configured to use secure connections. For general information on using SSL with MySQL see Section 6.3, “Using Encrypted Connections”. This section explains how to configure a cluster to use SSL. An additional security possibility is to configure which servers can access the cluster, see Creating a Whitelist of Servers.
Once you have configured a cluster to use SSL you must add the
servers to the ipWhitelist
.
When using dba.createCluster() to set up a cluster, if the server instance provides SSL encryption then it is automatically enabled on the seed instance. Pass the memberSslMode option to the dba.createCluster() method to specify a different SSL mode. The SSL mode of a cluster can only be set at the time of creation. The memberSslMode option is a string that configures the SSL mode to be used; it defaults to AUTO. The permitted values are DISABLED, REQUIRED, and AUTO. These modes are defined as:
Setting createCluster({memberSslMode:'DISABLED'}) ensures SSL encryption is disabled for the seed instance in the cluster.

Setting createCluster({memberSslMode:'REQUIRED'}) ensures SSL encryption is enabled for the seed instance in the cluster. If it cannot be enabled an error is raised.

Setting createCluster({memberSslMode:'AUTO'}) (the default) means SSL encryption is automatically enabled if the server instance supports it, or disabled if the server does not support it.
When using the commercial version of MySQL, SSL is enabled by default and you might need to configure the whitelist for all instances. See Creating a Whitelist of Servers.
When you issue the cluster.addInstance()
and
cluster.rejoinInstance()
commands, SSL
encryption on the instance is enabled or disabled based on the
setting found for the seed instance.
When using createCluster()
with the
adoptFromGR
option to adopt an existing Group
Replication group, no SSL settings are changed on the adopted
cluster:
memberSslMode cannot be used with adoptFromGR.

If the SSL settings of the adopted cluster are different from the ones supported by the MySQL Shell, in other words SSL for Group Replication recovery and Group Communication, both settings are not modified. This means you are not able to add new instances to the cluster, unless you change the settings manually for the adopted cluster.
MySQL Shell always enables or disables SSL for the cluster for
both Group Replication recovery and Group Communication; see
Section 18.5.2, “Group Replication Secure Socket Layer (SSL) Support”.
When a new instance is added to the cluster, a verification is
performed and an error is issued if those settings differ on
the seed instance (for example as the result of a
dba.createCluster() using adoptFromGR). SSL encryption must be
either enabled or disabled for all instances in the cluster,
and verifications are performed to ensure that this invariant
holds when adding a new instance.
The deploySandboxInstance()
command attempts
to deploy sandbox instances with SSL encryption support by
default. If it is not possible, the server instance is deployed
without SSL support. Use the ignoreSslError
option set to false to ensure that sandbox instances are
deployed with SSL support, issuing an error if SSL support
cannot be provided. When ignoreSslError
is
true, which is the default, no error is issued during the
operation if the SSL support cannot be provided and the server
instance is deployed without SSL support.
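For example, a minimal sketch that raises an error instead of deploying a sandbox without SSL support (the port number is illustrative):
mysql-js> dba.deploySandboxInstance(3310, {ignoreSslError: false})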
When using the createCluster(), addInstance(), and
rejoinInstance() methods you can optionally specify a list of
approved servers that belong to the cluster, referred to as a
whitelist. By specifying the whitelist explicitly in this way
you can increase the security of your cluster, because only
servers in the whitelist can connect to the cluster.
Using the ipWhitelist
option configures the
group_replication_ip_whitelist
system variable on the instance. By default, if not specified
explicitly, the whitelist is automatically set to the private
network addresses that the server has network interfaces on. To
configure the whitelist, specify the servers to add with the
ipWhitelist
option when using the method.
Pass the servers as a comma separated list, surrounded by
quotes. For example:
mysql-js> cluster.addInstance("ic@ic-3:3306", {ipWhitelist: "203.0.113.0/24, 198.51.100.110"})
This configures the instance to only accept connections from
servers at addresses 203.0.113.0/24
and
198.51.100.110
. The whitelist can also
include host names, which are resolved only when a connection
request is made by another server.
Host names are inherently less secure than IP addresses in a whitelist. MySQL carries out FCrDNS verification, which provides a good level of protection, but can be compromised by certain types of attack. Specify host names in your whitelist only when strictly necessary, and ensure that all components used for name resolution, such as DNS servers, are maintained under your control. You can also implement name resolution locally using the hosts file, to avoid the use of external components.
You can automate cluster configuration with scripts, which can be run using MySQL Shell. For example:
shell> mysqlsh -f setup-innodb-cluster.js
Any command line options specified after the script file name
are passed to the script and not to
MySQL Shell. You can access those options using the
os.argv
array in JavaScript, or the
sys.argv
array in Python. In both cases,
the first option picked up in the array is the script name.
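For instance, a minimal sketch of a JavaScript script that reads a port number passed after the script file name (the invocation and option handling are illustrative):
// Hypothetical invocation: mysqlsh -f deploy-one.js 3310
// os.argv[0] is the script name; any options passed after it follow in order.
var port = os.argv.length > 1 ? parseInt(os.argv[1]) : 3310;
print('Using port ' + port + '\n');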
The contents of an example script file are shown here:
print('InnoDB cluster sandbox set up\n');
print('==================================\n');
print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
print('The instances will be installed in ~/mysql-sandboxes.\n');
print('They will run on ports 3310, 3320 and 3330.\n\n');
var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});
try {
print('\nDeploying the sandbox instances.');
dba.deploySandboxInstance(3310, {password: dbPass});
print('.');
dba.deploySandboxInstance(3320, {password: dbPass});
print('.');
dba.deploySandboxInstance(3330, {password: dbPass});
print('.\nSandbox instances deployed successfully.\n\n');
print('Setting up InnoDB cluster...\n');
shell.connect('root@localhost:3310', dbPass);
var cluster = dba.createCluster("prodCluster");
print('Adding instances to the cluster.');
cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
print('.');
cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
print('.\nInstances successfully added to the cluster.');
print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
print('\nThe InnoDB cluster could not be created.\n\nError: ' +
e.message + '\n');
}
You can optionally configure how a single-primary cluster elects
a new primary, for example to prefer one instance as the new
primary to fail over to. Use the memberWeight
option and pass it to the dba.createCluster()
and Cluster.addInstance()
methods when
creating your cluster. The memberWeight
option accepts an integer value between 0 and 100, which is a
percentage weight for automatic primary election on failover.
When an instance has a higher percentage number set by
memberWeight, it is more likely to be elected as primary in a
single-primary cluster. When a primary election takes place, if
multiple instances have the same memberWeight value, the
instances are prioritized by their server UUID in
lexicographical order and the first one (the lowest UUID) is
picked.
Setting the value of memberWeight
configures
the
group_replication_member_weight
system variable on the instance. Group Replication limits the
value range from 0 to 100, automatically adjusting it if a
higher or lower value is provided. Group Replication uses a
default value of 50 if no value is provided. See
Section 18.1.3.1, “Single-Primary Mode” for more
information.
For example, to configure a cluster where ic-3 is the preferred
instance to fail over to in the event that ic-1, the current
primary, leaves the cluster unexpectedly, use memberWeight as
follows:
dba.createCluster('cluster1', {memberWeight:35})
var mycluster = dba.getCluster()
mycluster.addInstance('ic@ic-2', {memberWeight:25})
mycluster.addInstance('ic@ic-3', {memberWeight:50})
Group Replication provides the ability to specify the failover
guarantees (eventual or “read your writes”) if a
primary failover happens in single-primary mode (see
Section 18.4.2.2, “Configuring Transaction Consistency Guarantees”).
You can configure the failover guarantees of an InnoDB cluster
at creation by passing the consistency
option
(prior to version 8.0.16 this option was the
failoverConsistency
option, which is now
deprecated) to the dba.createCluster()
operation, which configures the
group_replication_consistency
system variable on the seed instance. This option defines the
behavior of a new fencing mechanism used when a new primary is
elected in a single-primary group. The fencing restricts
connections from writing and reading from the new primary until
it has applied any pending backlog of changes that came from the
old primary (sometimes referred to as “read your
writes”). While the fencing mechanism is in place,
applications effectively do not see time going backward for a
short period of time while any backlog is applied. This ensures
that applications do not read stale information from the newly
elected primary.
The consistency option is only supported if the target MySQL
server version is 8.0.14 or later. Instances added to a cluster
which has been configured with the consistency option
automatically have group_replication_consistency set to the
same value on all cluster members that support the option. The
default value of the variable is controlled by Group
Replication and is EVENTUAL; change the consistency option to
BEFORE_ON_PRIMARY_FAILOVER to enable the fencing mechanism.
Alternatively, use consistency=0 for EVENTUAL and consistency=1
for BEFORE_ON_PRIMARY_FAILOVER.
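For example, a minimal sketch of creating a cluster with the fencing mechanism enabled (the cluster name is illustrative):
mysql-js> dba.createCluster('testCluster', {consistency: 'BEFORE_ON_PRIMARY_FAILOVER'})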
Using the consistency option on a multi-primary InnoDB cluster
has no effect but is allowed, because the cluster can later be
changed into single-primary mode with the
Cluster.switchToSinglePrimaryMode() operation.
By default, an InnoDB cluster runs in single-primary mode, where the cluster has one primary server that accepts read and write queries (R/W), and all of the remaining instances in the cluster accept only read queries (R/O). When you configure a cluster to run in multi-primary mode, all of the instances in the cluster are primaries, which means that they accept both read and write queries (R/W).
If a cluster has all of its instances running MySQL server version 8.0.15 or later, you can make changes to the topology of the cluster while the cluster is online. In previous versions it was necessary to completely dissolve and re-create the cluster to make the configuration changes. This uses the group action coordinator exposed through the UDFs described at Section 18.4.1, “Configuring an Online Group”, and as such you should observe the rules for configuring online groups.
Multi-primary mode is considered an advanced mode.
Usually a single-primary cluster elects a new primary when the
current primary leaves the cluster unexpectedly, for example due
to an unexpected halt. The election process is normally used to
choose which of the current secondaries becomes the new primary.
To override the election process and force a specific server to
become the new primary, use the
Cluster.setPrimaryInstance(instance) function, where instance
specifies the connection to the instance which should become
the new primary. This enables you to configure the underlying
Group Replication group to choose a specific instance as the
new primary, bypassing the election process.
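For example, a minimal sketch of forcing ic-2 to become the new primary (the instance address is illustrative):
mysql-js> cluster.setPrimaryInstance('ic@ic-2:3306')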
You can change the mode (sometimes described as the topology) in which a cluster is running, between single-primary and multi-primary, using the following operations:
Cluster.switchToMultiPrimaryMode(), which switches the cluster to multi-primary mode. All instances become primaries.
Cluster.switchToSinglePrimaryMode([instance]), which switches the cluster to single-primary mode. If instance is specified, it becomes the primary and all the other instances become secondaries. If instance is not specified, the new primary is the instance with the highest member weight (and the lowest UUID in case of a tie on member weight).
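For example, a minimal sketch of switching a cluster to single-primary mode with ic-1 as the new primary (the instance address is illustrative):
mysql-js> cluster.switchToSinglePrimaryMode('ic@ic-1:3306')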
You can check and modify the settings in place for an InnoDB cluster while the instances are online. To check the current settings of a cluster, use the following operation:
Cluster.options(), which lists the cluster configuration options for its ReplicaSets and instances. A boolean option all can also be specified to include information about all Group Replication system variables in the output.
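For example, to include information about all Group Replication system variables in the output:
mysql-js> cluster.options({all: true})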
You can configure the options of an InnoDB cluster at the cluster level or instance level while instances remain online. This avoids the need to remove, reconfigure, and re-add an instance in order to change InnoDB cluster options. Use the following operations:
Cluster.setOption(option, value) to change the settings of all cluster instances globally, or cluster global settings such as clusterName.
Cluster.setInstanceOption(instance, option, value) to change the settings of individual cluster instances. Examples of both operations are shown after the following option lists.
The way in which you use InnoDB cluster options with the listed operations depends on whether the option can be changed to be the same on all instances or not. These options are changeable at both the cluster level (all instances) and the per instance level:
exitStateAction
memberWeight
This option is changeable at the per instance level only:
label
These options are changeable at the cluster level only:
consistency
expelTimeout
clusterName
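For example, a minimal sketch of changing a cluster-level setting and a per instance setting (the values and the instance address are illustrative):
mysql-js> cluster.setOption('clusterName', 'newClusterName')
mysql-js> cluster.setInstanceOption('ic@ic-2:3306', 'label', 'ic-2-label')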
When you are using an instance as part of an InnoDB cluster,
the auto_increment_increment
and auto_increment_offset
variables are configured to avoid the possibility of auto
increment collisions for multi-primary clusters up to a size of
9 (the maximum supported size of a Group Replication group). The
logic used to configure these variables can be summarized as:
If the group is running in single-primary mode, then set auto_increment_increment to 1 and auto_increment_offset to 2.
If the group is running in multi-primary mode, then when the cluster has 7 instances or less, set auto_increment_increment to 7 and auto_increment_offset to 1 + server_id % 7. If a multi-primary cluster has 8 or more instances, set auto_increment_increment to the number of instances and auto_increment_offset to 1 + server_id % the number of instances.
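For example, in a multi-primary cluster of three instances, the instance with server_id 2 gets auto_increment_increment set to 7 and auto_increment_offset set to 1 + (2 % 7) = 3, so it generates the values 3, 10, 17, and so on, which cannot collide with the sequences generated on the other instances.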