2017-08-15

How to Prevent Lock Escalation during Large Data Import

SQL Server employs a mechanism called lock escalation, which converts many fine-grained locks into fewer coarse-grained locks in order to reduce system overhead (mainly lock memory), at the cost of a higher probability of concurrency contention. A large data import or update can easily trigger escalation to the table level, at which point the whole table is covered by a single shared or exclusive table lock. You can temporarily disable lock escalation on a table by using the ALTER TABLE ... SET (LOCK_ESCALATION = ...) option. The READ_COMMITTED_SNAPSHOT database option (RCSI) can reduce locking too, because under RCSI read operations do not take shared locks.
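For example, lock escalation can be switched off for a single table and RCSI enabled at the database level. The statements below are a sketch; the table name Table1 matches the example later in this post, while the database name MyDatabase is a placeholder for your own:

-- Disable lock escalation for this table only.
-- Valid values are TABLE (default), AUTO, and DISABLE.
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = DISABLE);

-- Enable READ_COMMITTED_SNAPSHOT so readers use row versions
-- instead of shared locks. This ALTER DATABASE needs exclusive
-- access, so run it when no other connections are active.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- After the import finishes, restore the default behavior.
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = TABLE);

Remember to restore LOCK_ESCALATION afterwards, since escalation exists to keep lock memory under control during normal operation.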
You can also import or update table data in batches, keeping the number of rows touched per batch below the lock escalation thresholds: escalation is attempted when a single SQL statement acquires 5,000 locks on a table or index, and if escalation fails because of lock conflicts, the Database Engine retries it after every 1,250 additional locks acquired.
Below is an example of updating all rows in a table in batches of 1,000 rows, assuming the ID column is an ever-increasing value:
SET NOCOUNT ON;
DECLARE @totalCount int = (SELECT COUNT(*) FROM Table1);
PRINT '@totalCount=' + CAST(@totalCount AS varchar(10));
DECLARE @proceedID int = NULL;
DECLARE @MaxID int = (SELECT MAX(ID) FROM Table1);
DECLARE @proceedTblVar table (ID int);
WHILE @proceedID IS NULL OR @proceedID < @MaxID
BEGIN
    DELETE FROM @proceedTblVar;  -- keep only the current batch of IDs
    -- UPDATE TOP (n) does not support ORDER BY, so a CTE is used to
    -- take the 1,000 lowest remaining IDs; otherwise an arbitrary
    -- batch could be picked and rows below its max ID skipped.
    WITH Batch AS (
        SELECT TOP (1000) ID, Column2
        FROM Table1
        WHERE ID > ISNULL(@proceedID, 0)
        ORDER BY ID
    )
    UPDATE Batch SET Column2 = 9999
    OUTPUT inserted.ID INTO @proceedTblVar;
    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to update (e.g. empty table)
    SET @proceedID = (SELECT MAX(ID) FROM @proceedTblVar);
END;

