15.7.3 Locks Set by Different SQL Statements in InnoDB
A locking read, an UPDATE, or a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned. The locks are normally next-key locks that also block inserts into the “gap” immediately before the record. However, gap locking can be disabled explicitly, which causes next-key locking not to be used. For more information, see Section 15.7.1, “InnoDB Locking”. The transaction isolation level also can affect which locks are set; see Section 15.7.2.1, “Transaction Isolation Levels”.
If a secondary index is used in a search and index record locks to be set are exclusive, InnoDB also retrieves the corresponding clustered index records and sets locks on them.
If you have no indexes suitable for your statement and MySQL must scan the entire table to process the statement, every row of the table becomes locked, which in turn blocks all inserts by other users to the table. It is important to create good indexes so that your queries do not unnecessarily scan many rows.
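For example (a hypothetical orders table with no index on the status column is assumed here), a statement such as the following forces a full scan, so every row examined is locked even though only a few rows match:

-- hypothetical schema: no index exists on the status column
UPDATE orders SET status = 'archived' WHERE status = 'stale';

With an index on status, InnoDB would only need to lock the index range it actually scans.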
InnoDB sets specific types of locks as follows.
SELECT ... FROM is a consistent read, reading a snapshot of the database and setting no locks unless the transaction isolation level is set to SERIALIZABLE. For SERIALIZABLE level, the search sets shared next-key locks on the index records it encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

SELECT ... FOR UPDATE and SELECT ... FOR SHARE statements that use a unique index acquire locks for scanned rows, and release the locks for rows that do not qualify for inclusion in the result set (for example, if they do not meet the criteria given in the WHERE clause). However, in some cases, rows might not be unlocked immediately because the relationship between a result row and its original source is lost during query execution. For example, in a UNION, scanned (and locked) rows from a table might be inserted into a temporary table before evaluation of whether they qualify for the result set. In this circumstance, the relationship of the rows in the temporary table to the rows in the original table is lost and the latter rows are not unlocked until the end of query execution.

For locking reads (SELECT with FOR UPDATE or FOR SHARE), UPDATE, and DELETE statements, the locks that are taken depend on whether the statement uses a unique index with a unique search condition, or a range-type search condition.

For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it.

For other search conditions, and for non-unique indexes, InnoDB locks the index range scanned, using gap locks or next-key locks to block insertions by other sessions into the gaps covered by the range. For information about gap locks and next-key locks, see Section 15.7.1, “InnoDB Locking”.
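As a sketch of the difference (table and values are hypothetical; assume id is the primary key and rows with id 10 and 20 exist):

-- unique index, unique search condition: only the record with id = 10 is locked
SELECT * FROM t WHERE id = 10 FOR UPDATE;

-- range condition: next-key/gap locks cover the scanned range, so another
-- session cannot insert, for example, id = 15 until this transaction ends
SELECT * FROM t WHERE id BETWEEN 10 AND 20 FOR UPDATE;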
For index records the search encounters, SELECT ... FOR UPDATE blocks other sessions from doing SELECT ... FOR SHARE or from reading in certain transaction isolation levels. Consistent reads ignore any locks set on the records that exist in the read view.

UPDATE ... WHERE ... sets an exclusive next-key lock on every record the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

When UPDATE modifies a clustered index record, implicit locks are taken on affected secondary index records. The UPDATE operation also takes shared locks on affected secondary index records when performing duplicate check scans prior to inserting new secondary index records, and when inserting new secondary index records.

DELETE FROM ... WHERE ... sets an exclusive next-key lock on every record the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

INSERT sets an exclusive lock on the inserted row. This lock is an index-record lock, not a next-key lock (that is, there is no gap lock) and does not prevent other sessions from inserting into the gap before the inserted row.

Prior to inserting the row, a type of gap lock called an insert intention gap lock is set. This lock signals the intent to insert in such a way that multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose that there are index records with values of 4 and 7. Separate transactions that attempt to insert values of 5 and 6 each lock the gap between 4 and 7 with insert intention locks prior to obtaining the exclusive lock on the inserted row, but do not block each other because the rows are nonconflicting.
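A minimal sketch of that scenario (the table name is hypothetical; assume a single indexed column and existing rows 4 and 7):

-- session A
START TRANSACTION;
INSERT INTO t VALUES (5);  -- insert intention lock on the gap (4,7), then a record lock on 5

-- session B, concurrently
START TRANSACTION;
INSERT INTO t VALUES (6);  -- same gap, different position: does not block session A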
If a duplicate-key error occurs, a shared lock on the duplicate index record is set. This use of a shared lock can result in deadlock should there be multiple sessions trying to insert the same row if another session already has an exclusive lock. This can occur if another session deletes the row. Suppose that an InnoDB table t1 has the following structure:
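A minimal definition is assumed here: a single INT column, i, serving as the primary key.

CREATE TABLE t1 (i INT, PRIMARY KEY (i)) ENGINE = InnoDB;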
Now suppose that three sessions perform the following operations in order:
Session 1:
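-- for example (statements assumed for illustration):
START TRANSACTION;
INSERT INTO t1 VALUES(1);   -- session 1 acquires an exclusive lock on the row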
Session 2:
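START TRANSACTION;
INSERT INTO t1 VALUES(1);   -- duplicate-key error; waits for a shared lock on the row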
Session 3:
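START TRANSACTION;
INSERT INTO t1 VALUES(1);   -- duplicate-key error; also waits for a shared lock on the row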
Session 1:
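ROLLBACK;   -- releases the exclusive lock; sessions 2 and 3 then deadlock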
The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 rolls back, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.
A similar situation occurs if the table already contains a row with key value 1 and three sessions perform the following operations in order:
Session 1:
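-- for example (statements assumed for illustration):
START TRANSACTION;
DELETE FROM t1 WHERE i = 1;   -- session 1 acquires an exclusive lock on the existing row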
Session 2:
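START TRANSACTION;
INSERT INTO t1 VALUES(1);   -- duplicate-key error; waits for a shared lock on the row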
Session 3:
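START TRANSACTION;
INSERT INTO t1 VALUES(1);   -- duplicate-key error; also waits for a shared lock on the row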
Session 1:
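COMMIT;   -- releases the exclusive lock; sessions 2 and 3 then deadlock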
The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 commits, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.
INSERT ... ON DUPLICATE KEY UPDATE differs from a simple INSERT in that an exclusive lock rather than a shared lock is placed on the row to be updated when a duplicate-key error occurs. An exclusive index-record lock is taken for a duplicate primary key value. An exclusive next-key lock is taken for a duplicate unique key value.
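For instance (reusing the hypothetical t1 from the earlier example), the following statement takes an exclusive index-record lock on the row with i = 1 if that value already exists, rather than the shared lock a plain INSERT would request:

INSERT INTO t1 (i) VALUES (1) ON DUPLICATE KEY UPDATE i = i;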
REPLACE is done like an INSERT if there is no collision on a unique key. Otherwise, an exclusive next-key lock is placed on the row to be replaced.

INSERT INTO T SELECT ... FROM S WHERE ... sets an exclusive index record lock (without a gap lock) on each row inserted into T. If the transaction isolation level is READ COMMITTED, InnoDB does the search on S as a consistent read (no locks). Otherwise, InnoDB sets shared next-key locks on rows from S. InnoDB has to set locks in the latter case: During roll-forward recovery using a statement-based binary log, every SQL statement must be executed in exactly the same way it was done originally.

CREATE TABLE ... SELECT ... performs the SELECT with shared next-key locks or as a consistent read, as for INSERT ... SELECT.

When a SELECT is used in the constructs REPLACE INTO t SELECT ... FROM s WHERE ... or UPDATE t ... WHERE col IN (SELECT ... FROM s ...), InnoDB sets shared next-key locks on rows from table s.
While initializing a previously specified AUTO_INCREMENT column on a table, InnoDB sets an exclusive lock on the end of the index associated with the AUTO_INCREMENT column. In accessing the auto-increment counter, InnoDB uses a specific AUTO-INC table lock mode where the lock lasts only to the end of the current SQL statement, not to the end of the entire transaction. Other sessions cannot insert into the table while the AUTO-INC table lock is held; see Section 15.7.2, “InnoDB Transaction Model”.

InnoDB fetches the value of a previously initialized AUTO_INCREMENT column without setting any locks.

If a FOREIGN KEY constraint is defined on a table, any insert, update, or delete that requires the constraint condition to be checked sets shared record-level locks on the records that it looks at to check the constraint. InnoDB also sets these locks in the case where the constraint fails.
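As an illustration (hypothetical parent/child tables), inserting a child row makes InnoDB check the constraint against the parent table and place a shared record lock on the referenced parent record:

-- hypothetical schema
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE = InnoDB;
CREATE TABLE child (id INT PRIMARY KEY, parent_id INT,
                    FOREIGN KEY (parent_id) REFERENCES parent(id)) ENGINE = InnoDB;

INSERT INTO child VALUES (1, 10);
-- the constraint check reads the parent row with id = 10 and takes a shared
-- record lock on what it looks at, whether or not the check succeeds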
LOCK TABLES sets table locks, but it is the higher MySQL layer above the InnoDB layer that sets these locks. InnoDB is aware of table locks if innodb_table_locks = 1 (the default) and autocommit = 0, and the MySQL layer above InnoDB knows about row-level locks.

Otherwise, InnoDB's automatic deadlock detection cannot detect deadlocks where such table locks are involved. Also, because in this case the higher MySQL layer does not know about row-level locks, it is possible to get a table lock on a table where another session currently has row-level locks. However, this does not endanger transaction integrity, as discussed in Section 15.7.5.2, “Deadlock Detection and Rollback”. See also Section 15.6.1.6, “Limits on InnoDB Tables”.
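A short sketch of the settings involved (session-level values and the table t1 are used only for illustration):

-- with these settings, InnoDB is aware of the table lock taken by the MySQL layer,
-- so deadlocks involving it can be detected
SET SESSION autocommit = 0;
SET SESSION innodb_table_locks = 1;  -- 1 is the default; shown only for clarity
LOCK TABLES t1 WRITE;
-- ... statements against t1 ...
UNLOCK TABLES;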