16.6 The BLACKHOLE Storage Engine
The BLACKHOLE storage engine acts as a “black hole” that accepts data but throws it away and does not store it. Retrievals always return an empty result:

mysql> CREATE TABLE test(i INT, c CHAR(10)) ENGINE = BLACKHOLE;
Query OK, 0 rows affected (0.03 sec)

mysql> INSERT INTO test VALUES(1,'record one'),(2,'record two');
Query OK, 2 rows affected (0.00 sec)

mysql> SELECT * FROM test;
Empty set (0.00 sec)
To enable the BLACKHOLE storage engine if you build MySQL from source, invoke CMake with the -DWITH_BLACKHOLE_STORAGE_ENGINE option.
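For example, from the top of a source tree (a minimal sketch; any other CMake options your build needs are omitted):

shell> cmake . -DWITH_BLACKHOLE_STORAGE_ENGINE=1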
To examine the source for the BLACKHOLE engine, look in the sql directory of a MySQL source distribution.
When you create a BLACKHOLE table, the server creates the table definition in the global data dictionary. There are no files associated with the table.
The BLACKHOLE storage engine supports all kinds of indexes. That is, you can include index declarations in the table definition.
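For example, the following definition (names illustrative) declares both a primary key and a secondary index, and is accepted even though no rows are ever stored:

mysql> CREATE TABLE indexed_bh (
    ->   id INT NOT NULL,
    ->   val CHAR(10),
    ->   PRIMARY KEY (id),
    ->   INDEX idx_val (val)
    -> ) ENGINE = BLACKHOLE;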
The BLACKHOLE storage engine does not support partitioning.
You can check whether the BLACKHOLE storage engine is available with the SHOW ENGINES statement.
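For example, either of the following shows whether the engine is available (the Support value reads YES if it is); the INFORMATION_SCHEMA query is an equivalent alternative:

mysql> SHOW ENGINES;

mysql> SELECT ENGINE, SUPPORT
    ->   FROM INFORMATION_SCHEMA.ENGINES
    ->  WHERE ENGINE = 'BLACKHOLE';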
Inserts into a BLACKHOLE table do not store any data, but if statement-based binary logging is enabled, the SQL statements are logged and replicated to slave servers. This can be useful as a repeater or filter mechanism.
Suppose that your application requires slave-side filtering rules, but transferring all binary log data to the slave first results in too much traffic. In such a case, it is possible to set up on the master host a “dummy” slave process whose default storage engine is BLACKHOLE, so that the replication chain becomes: master → dummy mysqld (BLACKHOLE) → slave.
The master writes to its binary log. The “dummy” mysqld process acts as a slave, applying the desired combination of replicate-do-* and replicate-ignore-* rules, and writes a new, filtered binary log of its own. (See Section 17.1.6, “Replication and Binary Logging Options and Variables”.) This filtered log is provided to the slave.
The dummy process does not actually store any data, so there is little processing overhead incurred by running the additional mysqld process on the replication master host. This type of setup can be repeated with additional replication slaves.
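As a rough sketch, the dummy process might run with options along the following lines in its my.cnf (the server ID, log name, and filter rule are illustrative; log-slave-updates is needed so that statements arriving from the master are written to the dummy's own binary log):

[mysqld]
server-id              = 2
default-storage-engine = blackhole
log-bin                = filtered-bin
log-slave-updates      = 1
# Illustrative filter rule: replicate only this table
replicate-do-table     = db1.important_table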
INSERT triggers for BLACKHOLE tables work as expected. However, because the BLACKHOLE table does not actually store any data, UPDATE and DELETE triggers are not activated: the FOR EACH ROW clause in the trigger definition does not apply because there are no rows.
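A sketch with illustrative names; the INSERT trigger fires once per inserted row, while UPDATE and DELETE never activate a trigger:

mysql> CREATE TABLE bh (i INT) ENGINE = BLACKHOLE;
mysql> CREATE TABLE audit (msg VARCHAR(64)) ENGINE = InnoDB;
mysql> CREATE TRIGGER bh_ins BEFORE INSERT ON bh
    ->   FOR EACH ROW INSERT INTO audit VALUES ('insert seen');
mysql> INSERT INTO bh VALUES (1);  -- fires bh_ins; audit gains a row
mysql> UPDATE bh SET i = 2;        -- matches no rows; an UPDATE trigger would not activate
mysql> DELETE FROM bh;             -- likewise for a DELETE trigger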
Other possible uses for the BLACKHOLE storage engine include:
- Verification of dump file syntax.
- Measurement of the overhead from binary logging, by comparing performance using BLACKHOLE with and without binary logging enabled.
- BLACKHOLE is essentially a “no-op” storage engine, so it could be used for finding performance bottlenecks not related to the storage engine itself.
The BLACKHOLE engine is transaction-aware, in the sense that committed transactions are written to the binary log and rolled-back transactions are not.
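A sketch, reusing the illustrative bh table from above:

mysql> SET autocommit = 0;
mysql> INSERT INTO bh VALUES (1);
mysql> ROLLBACK;   -- rolled back: not written to the binary log
mysql> INSERT INTO bh VALUES (2);
mysql> COMMIT;     -- committed: written to the binary log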
Blackhole Engine and Auto Increment Columns
The Blackhole engine is a no-op engine: any operation performed on a Blackhole table has no effect. This should be borne in mind when considering the behavior of auto-increment primary key columns. The engine does not automatically increment field values and does not retain auto-increment field state. This has important implications for replication.
Consider the following replication scenario, where all three of the following conditions apply:
- On a master server there is a blackhole table with an auto-increment field that is a primary key.
- On a slave the same table exists but uses the MyISAM engine.
- Inserts are performed into the master's table without explicitly setting the auto-increment value in the INSERT statement itself or through using a SET INSERT_ID statement.
In this scenario, replication fails with a duplicate entry error on the primary key column.
In statement-based replication, the value of INSERT_ID in the context event is always the same. Replication therefore fails because it tries to insert a row with a duplicate value for a primary key column.
In row-based replication, the value that the engine returns for the row is always the same for each insert. This results in the slave attempting to replay two insert log entries using the same value for the primary key column, so replication fails.
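A sketch of the failing configuration (table and column names illustrative):

-- On the master:
mysql> CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, c CHAR(10))
    ->   ENGINE = BLACKHOLE;

-- On the slave:
mysql> CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, c CHAR(10))
    ->   ENGINE = MyISAM;

-- On the master, insert without setting the auto-increment value:
mysql> INSERT INTO t (c) VALUES ('one');
mysql> INSERT INTO t (c) VALUES ('two');
-- Both inserts replicate with the same id value, so the slave fails
-- with a duplicate entry error on the primary key.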
Column Filtering
When using row-based replication (binlog_format=ROW), a slave where the last columns are missing from a table is supported, as described in Section 17.4.1.9, “Replication with Differing Table Definitions on Master and Slave”.
This filtering works on the slave side; that is, the columns are copied to the slave before they are filtered out. There are at least two cases where it is not desirable to copy the columns to the slave:
- If the data is confidential, so the slave server should not have access to it.
- If the master has many slaves, filtering before sending to the slaves may reduce network traffic.
Master column filtering can be achieved using the BLACKHOLE engine. This is carried out in a way similar to how master table filtering is achieved: by using the BLACKHOLE engine together with the --replicate-do-table or --replicate-ignore-table option.
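The table definitions below sketch this pattern; public_col_* and secret_col_* are placeholders for actual column names, and the “...” elisions stand for the remaining column definitions.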
The setup for the master is:
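CREATE TABLE t1 (public_col_1, ..., public_col_N,
                 secret_col_1, ..., secret_col_M) ENGINE=MyISAM;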
The setup for the trusted slave is:
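CREATE TABLE t1 (public_col_1, ..., public_col_N) ENGINE=BLACKHOLE;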
The setup for the untrusted slave is:
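CREATE TABLE t1 (public_col_1, ..., public_col_N) ENGINE=MyISAM;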