2017-12-20

Troubleshoot SQL Server Disk I/O Slowness

Last week, the storage device hosting my production SQL Server suffered a hardware component failure, so I moved the production database files to a new SAN storage. After that, some users complained that the system occasionally hung, so the first thing I checked was whether anything was abnormal in the disk I/O performance. There were errors logged in the SQL Server error log saying "SQL Server has encountered XX occurrence(s) of I/O requests taking longer than 15 seconds to complete on file (...\MyUserDB.mdf) in database [MyUserDB]".
I am a lucky guy: my company bought me SQL Sentry, which I find to be the best performance monitor for SQL Server. It showed me that during that period the database data file had very high disk read latency (over 2,000 ms, compared with the normal value of under 5 ms).
I then looked for errors in the Windows system log and found an iSCSI error.
Finally, I called the storage hardware vendor to fix it. Meanwhile, I also double-checked that the antivirus scan excludes SQL Server related files and directories, as listed in the Microsoft knowledge base, and made sure no disk defragmentation task is scheduled on any database disk drive.
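If you do not have a monitoring tool at hand, the sys.dm_io_virtual_file_stats DMV gives a rough per-file latency picture. Below is a minimal sketch (not part of the original troubleshooting steps) that reports the average read and write latency of every database file since the last SQL Server restart:

SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       -- average latency in ms, guarding against division by zero
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS AvgReadLatencyMs,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS AvgWriteLatencyMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY AvgReadLatencyMs DESC;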

2017-11-15

Moving Database with Mirroring and Log Shipping

For disk maintenance or relocation reasons, sometimes you need to move user database data and log files to a new disk location. Below is the procedure to move a primary database that has Mirroring and Log Shipping enabled:

1. On the new location folder, grant sufficient access permissions to the SQL Server service account.

2. Disable the Log Backup Agent Job for the database on the primary server; the agent job name should be LSBackup_<DBName>.

3. If the log backup agent job is running, wait for it to complete, to prevent any transaction log that has not yet been transferred to the secondary database from being accidentally truncated on the primary (otherwise the secondary database must be reinitialized from a full backup later on).

4. Drop mirroring with the SQL statement: ALTER DATABASE DBName SET PARTNER OFF;

5. Move data and log files:
USE master;
GO
-- Return the logical file name.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'DBName');
GO
-- Offline
ALTER DATABASE DBName SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
-- Physically move the files to the new location, then update the file paths.
-- In the following statements, modify the path specified in FILENAME to
-- the new location of the file on your server.
-- *** Make sure the SQL Server service account has security rights to access the new directory ***
ALTER DATABASE DBName
    MODIFY FILE ( NAME = DB_Data_LogicalName,
                  FILENAME = 'New Location\DB_Data.mdf');
GO
ALTER DATABASE DBName
    MODIFY FILE ( NAME = DB_Log_LogicalName,
                  FILENAME = 'New Location\DB_Log.ldf');
GO
-- Online
ALTER DATABASE DBName SET ONLINE;
GO
--Verify the new location.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'DBName');

6. Re-Create Mirroring (for example mirroring endpoint port number 5022):
  a) On Standby: ALTER DATABASE DBName SET PARTNER = 'TCP://PrimaryServer:5022';
  b) On Primary: ALTER DATABASE DBName SET PARTNER = 'TCP://StandbyServer:5022';

7. Re-Enable the Log Backup Agent Job.

8. Check that Mirroring and Log Shipping have resumed, for example with the query below.
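A quick way to confirm the mirroring session is healthy again is to query sys.database_mirroring (a minimal sketch, not part of the original procedure; DBName is a placeholder):

SELECT DB_NAME(database_id) AS DatabaseName,
       mirroring_role_desc,
       mirroring_state_desc,          -- expect SYNCHRONIZED once mirroring has resumed
       mirroring_partner_instance
FROM sys.database_mirroring
WHERE DB_NAME(database_id) = N'DBName';

For log shipping, checking the recent history of the LSBackup, LSCopy and LSRestore agent jobs (or the Transaction Log Shipping Status standard report in SSMS) is usually sufficient.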

2017-10-17

Detecting Lock Escalation

To detect lock escalation in SQL Server, you can use SQL Profiler or SQL Trace to capture the Lock:Escalation event. This post shows another way: compare the 'Table Lock Escalations/sec' counter value exposed in the sys.dm_os_performance_counters DMV before and after your query executes. Demo below:
SET NOCOUNT ON;
-- create testing table and add 10,000 rows
IF EXISTS (SELECT 1 FROM sys.objects WHERE [type] = 'U' AND name = 'T_Escalation') DROP TABLE T_Escalation;
GO
CREATE TABLE T_Escalation (pk int PRIMARY KEY, col varchar(50));
GO
DECLARE @i int = 0;
WHILE @i < 10000
BEGIN
    INSERT T_Escalation (pk, col) VALUES (@i, 'aaa');
    SET @i+=1;
END
GO

DECLARE @v1 int, @v2 int;

-- get "lock escalation" counter before statement execution
SET @v1 = (SELECT TOP 1 cntr_value FROM sys.dm_os_performance_counters WHERE counter_name = 'Table Lock Escalations/sec');
UPDATE T_Escalation SET col = 'XXX' WHERE pk <= 1000;
SELECT @@ROWCOUNT AS [updated rows];
-- get it again after statement executed
SET @v2 = (SELECT TOP 1 cntr_value FROM sys.dm_os_performance_counters WHERE counter_name = 'Table Lock Escalations/sec');
SELECT @v2 - @v1 AS [Table Lock Escalation];

-- get "lock escalation" counter before statement execution
SET @v1 = (SELECT TOP 1 cntr_value FROM sys.dm_os_performance_counters WHERE counter_name = 'Table Lock Escalations/sec');
UPDATE T_Escalation SET col = 'XXX' WHERE pk <= 7000;
SELECT @@ROWCOUNT AS [updated rows];
-- get it again after statement executed
SET @v2 = (SELECT TOP 1 cntr_value FROM sys.dm_os_performance_counters WHERE counter_name = 'Table Lock Escalations/sec');
SELECT @v2 - @v1 AS [Table Lock Escalation];
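With the default lock escalation threshold of roughly 5,000 locks per statement, the first UPDATE (1,000 rows) should report 0 table lock escalations, while the second UPDATE (7,000 rows) should report 1.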

2017-09-14

Temporal Table

SQL Server 2016 introduces system-versioned temporal tables, which let you query the data stored in a table as of any point in time, rather than only the data that is correct at the current moment. A temporal table lets you query updated and deleted rows, while a normal table can only return the current data. Temporal tables can be used to implement data auditing, to repair logical data corruption caused by a program bug or human error, and to build slowly changing dimensions. Below is an example of creating a temporal table and querying it to perform time-based analysis.

USE [TestDB]
GO

-- Create Table (PK is a must)
CREATE TABLE [dbo].[ProductInventory](
    [ProductID] [int] NOT NULL,
    [LocationID] [smallint] NOT NULL,
    [Shelf] [nvarchar](10) NOT NULL,
    [Bin] [tinyint] NOT NULL,
    [Quantity] [smallint] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
 CONSTRAINT [PK_ProductInventory_ProductID_LocationID] PRIMARY KEY CLUSTERED
(
    [ProductID] ASC,
    [LocationID] ASC
))
GO

-- Add the PERIOD datetime2 columns (non-nullable); optionally mark them HIDDEN so they are not returned by SELECT *
ALTER TABLE dbo.[ProductInventory] ADD
ValidFrom datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN CONSTRAINT DF_ValidFrom DEFAULT CONVERT(datetime2 (0), '2008-03-31 00:00:00') ,
ValidTo datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN CONSTRAINT DF_ValidTo DEFAULT CONVERT(datetime2 (0), '9999-12-31 23:59:59'),
PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
GO

-- mark the table as Temporal table, and specify the History Table name
ALTER TABLE dbo.[ProductInventory] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.[ProductInventory_History]));
GO

-- Testing data manipulations
INSERT [dbo].[ProductInventory] (
ProductID,
LocationID,
Shelf,
Bin,
Quantity,
ModifiedDate)
VALUES (
1, 1, 'A', 1, 300, '20160229 16:10:29.303'
);
WAITFOR DELAY '00:00:05'
INSERT [dbo].[ProductInventory] (
ProductID,
LocationID,
Shelf,
Bin,
Quantity,
ModifiedDate)
VALUES (
2, 2, 'B', 2, 6000, '20160229 20:00:00'
);
UPDATE [ProductInventory] SET Quantity = 8 WHERE ProductID = 1
UPDATE [ProductInventory] SET Quantity = 10 WHERE ProductID = 2
DELETE [ProductInventory] WHERE ProductID = 2

-- query
SELECT CAST(SYSUTCDATETIME() AS datetime2(0))
SELECT * FROM [dbo].[ProductInventory]
SELECT * FROM [dbo].[ProductInventory_History]
SELECT *, ValidFrom, ValidTo FROM [dbo].[ProductInventory]
SELECT *, ValidFrom, ValidTo FROM [dbo].[ProductInventory_History]

-- time-based query
SELECT * FROM ProductInventory FOR SYSTEM_TIME AS OF '20170206 10:00';
SELECT * FROM ProductInventory FOR SYSTEM_TIME FROM '20170106 10:00' TO '20170801 00:00';
SELECT * FROM ProductInventory FOR SYSTEM_TIME BETWEEN '20170106 10:00' AND '20170801 00:00';
SELECT * FROM ProductInventory FOR SYSTEM_TIME CONTAINED IN ('20170106 10:00', '20170801 00:00');
SELECT * FROM ProductInventory FOR SYSTEM_TIME ALL;
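One gotcha worth noting (not covered in the example above): a system-versioned table cannot be dropped directly. Switch SYSTEM_VERSIONING off first, after which the history table becomes an ordinary table and both can be dropped:

-- stop system versioning before cleaning up the demo objects
ALTER TABLE dbo.ProductInventory SET (SYSTEM_VERSIONING = OFF);
DROP TABLE dbo.ProductInventory;
DROP TABLE dbo.ProductInventory_History;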

2017-08-15

How to Prevent Lock Escalation during Large Data Import

SQL Server employs a mechanism called lock escalation, which converts many fine-grained locks into fewer coarse-grained locks in order to reduce system overhead (mainly memory), at the cost of a higher probability of concurrency contention. A large data import/update can easily trigger lock escalation to the table level, leaving your whole table locked by either a shared or an exclusive table lock. You can temporarily disable lock escalation on a table by using the ALTER TABLE ... SET LOCK_ESCALATION option, as sketched below. READ_COMMITTED_SNAPSHOT isolation (RCSI) can reduce locking too, as it does not take shared locks for read operations.
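For reference, a minimal sketch of toggling the option (dbo.Table1 is just a placeholder name, matching the batching example further down):

-- disable lock escalation on the table before the large import/update
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = DISABLE);

-- ... run the large import/update here ...

-- restore the default setting afterwards (TABLE is the default)
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = TABLE);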
You can also import/update table data in batches, keeping the number of rows modified per batch below the lock escalation thresholds (escalation is attempted when a single SQL statement acquires 5,000 locks on a table/index; if escalation fails because of lock conflicts, the Database Engine retries it after every 1,250 additional locks acquired).
Below is an example of how to update all rows in a table in batches of 1,000 rows, where the ID column is an increasing value:
SET NOCOUNT ON;
DECLARE @totalCount int = (SELECT COUNT(*) FROM Table1);
PRINT '@totalCount=' + CAST(@totalCount AS varchar(10));
DECLARE @proceedID int = NULL;
DECLARE @MaxID int = (SELECT TOP 1 ID FROM Table1 ORDER BY ID DESC);
DECLARE @proceedTblVar table (ID int);
SET NOCOUNT OFF;
WHILE @proceedID IS NULL OR @proceedID < @MaxID
BEGIN
    UPDATE TOP (1000) Table1 SET Column2 = 9999 OUTPUT inserted.ID INTO @proceedTblVar WHERE ID > ISNULL(@proceedID, 0);
    SET @proceedID =(SELECT TOP 1 ID FROM @proceedTblVar ORDER BY ID DESC);
END;

The output should show the total row count first, followed by a "1000 rows affected" message for each batch until all rows have been processed, with no batch large enough to trigger escalation to a table lock.

2017-07-17

Table Variable vs Temporary Table

There is a subtle difference between table variables and temporary tables that is quite critical for SQL developers: within a transaction (i.e. statements between BEGIN TRAN and COMMIT/ROLLBACK), temporary tables are affected by a rollback, whereas table variables are NOT part of the transaction. E.g.
CREATE TABLE #t (i int);
DECLARE @t table (i int);
BEGIN TRAN
INSERT @t VALUES (1);
INSERT #t VALUES (1);
SELECT 'Before Rollback #Tmp', * FROM #t;
SELECT 'Before Rollback @var', * FROM @t;
ROLLBACK
SELECT 'After Rollback #Tmp', * FROM #t;
SELECT 'After Rollback @var', * FROM @t;
DROP TABLE #t;

Result: before the rollback both the temporary table and the table variable return the inserted row; after the rollback the temporary table is empty, while the table variable still holds its row.
Microsoft's documentation provides a full comparison of the differences between the two.

2017-06-08

Number of Rows Read in Actual Execution Plan

There is a new piece of diagnostic information in SQL Server 2012 SP3, SQL Server 2014 SP2, and SQL Server 2016: Number of Rows Read, inside the actual execution plan.
Before these releases, we could only see the number of rows returned by an operator, not the actual number of rows the operator processed/read. In the case of "residual predicate pushdown", there is a hidden filter (a.k.a. Predicate or Residual Predicate) that is evaluated for each row output by the table/index scan operator. In this case, the number of rows returned and the number of rows the operator actually read can differ greatly.
Let's see an example as below:

SET NOCOUNT ON;
GO

IF OBJECT_ID(N'ActualRowTbl') IS NOT NULL
    DROP TABLE ActualRowTbl;
GO

CREATE TABLE ActualRowTbl (
    id INT IDENTITY NOT NULL PRIMARY KEY,
    firstName varchar(100) DEFAULT ('Ken'),
    lastName varchar(100) DEFAULT ('Lam')
)
GO

INSERT ActualRowTbl DEFAULT VALUES;
GO 999

INSERT ActualRowTbl VALUES ('Jeremy','Po');
GO

CREATE INDEX IX_ActualRowTbl_1 ON ActualRowTbl (firstName, lastName);
GO

SELECT firstName FROM ActualRowTbl WHERE lastName = 'Po'
GO


By checking the actual execution plan of the last SELECT statement, you can see a big difference between the number of rows returned (Actual Number of Rows) and Number of Rows Read inside the Index Scan operator.

2017-05-15

Free Tool - IO Statistics Parser

To troubleshoot query performance, SET STATISTICS IO ON is a must in order to check the disk activity generated by a query. However, the text-based output of this option is not easy for a human to analyze. Fortunately there is a free web tool called Statistics Parser: copy and paste the statistics output from SSMS into it and it renders the numbers in a much more readable report format.
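A typical workflow looks like the sketch below (dbo.YourTable and SomeColumn are just placeholders); the Messages tab output is what gets pasted into Statistics Parser:

SET STATISTICS IO ON;
GO
-- run the query you want to analyze
SELECT * FROM dbo.YourTable WHERE SomeColumn = 'SomeValue';
GO
SET STATISTICS IO OFF;
GO
-- the Messages tab then shows lines like
--   Table 'YourTable'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...
-- which is the text to paste into Statistics Parser.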

2017-04-21

Running Query Execution Statistics

SQL Server 2014 introduced a new DMV, sys.dm_exec_query_profiles, which can be used to monitor real-time query progress while the query is executing. The session being monitored must have either SET STATISTICS XML ON or SET STATISTICS PROFILE ON enabled, so most likely you will run the query being checked in SSMS.
The SQL script below uses this DMV to show the operators of a particular session, with the most expensive I/O operations on top:

DECLARE @session_id smallint = <spid of the query to check in SSMS>
SELECT
P.plan_handle, P.physical_operator_name, OBJECT_NAME(P.[object_id]) AS objName,
P.node_id, P.index_id, P.cpu_time_ms, P.estimate_row_count,
P.row_count, P.logical_read_count, P.elapsed_time_ms,
P.read_ahead_count, P.scan_count
FROM sys.dm_exec_query_profiles AS P
WHERE P.session_id = @session_id
ORDER BY (P.row_count-P.estimate_row_count) + P.logical_read_count DESC


2017-03-14

Storage Best Practice - 64KB NTFS Allocation Unit Size

According to the SQL Server Best Practices Article, it is recommended that you use a 64-KB NTFS allocation unit size for data files, log files, and tempdb. Below is my experience of how this best practice improved the throughput of my production system by reducing transaction log disk write latency.

The transaction log file was originally placed on a disk partition with the default 4-KB allocation unit size, and disk write latency often climbed above 10 ms.

Create a new disk partition with 64-KB NTFS allocation unit size, in Administrative Tools > Computer Management > Storage > Disk Management > Format > NTFS, 64K

Double check the new partition:
  1. Open Administrator command prompt
  2. fsutil fsinfo ntfsinfo [your drive]
  3. Check "Bytes Per Cluster" value is 65536

Move transaction log file to the new partition:
/* Ref. https://technet.microsoft.com/en-us/library/gg452698.aspx */
-- Check file size
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Shrink the transaction log file to release unused space before the move
USE YourDB
GO
DBCC SHRINKFILE(YourDB_log, TRUNCATEONLY);
GO
USE master
GO
-- Check file size
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO

-- Offline
ALTER DATABASE YourDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO

-- *** COPY THE FILE TO THE NEW LOCATION USING WINDOWS FILE EXPLORER YOURSELF ***

-- Move file
USE master
GO
ALTER DATABASE YourDB MODIFY FILE (NAME = YourDB_log, FILENAME = '<*** NEW PATH ***>\YourDB_log.ldf')
GO
-- Check file location changed
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Online
USE master
GO
ALTER DATABASE YourDB SET ONLINE
GO
-- Check file again
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Grow the transaction log file back to its original size, e.g. 10 GB
USE master
GO
ALTER DATABASE [YourDB]
MODIFY FILE ( NAME = N'YourDB_log', SIZE = 10000MB )
GO


Now the transaction log disk write latency stays under 2 ms.

2017-02-02

SQL Server 2016 Always Encrypted - Part 1

Always Encrypted is a client-side encryption technology in which data is automatically encrypted before it is written and decrypted only when it is read by an approved application. Unlike Transparent Data Encryption (TDE), which encrypts the data on disk but allows it to be read by any application that queries it, Always Encrypted requires your client application to use an Always Encrypted-enabled driver to communicate with the database. At the time of writing, the only Always Encrypted-enabled driver is the .NET Framework Data Provider for SQL Server (a.k.a. ADO.NET), which requires .NET Framework 4.6 on the client computer.
Below is an example of how to create a table with Always Encrypted columns:
-- 1. Creating a column master key
USE TestDB
GO
CREATE COLUMN MASTER KEY TestColumnMasterKey WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/DE3A770F25EBD6071305B77FB198D1AE434E6014'
);
GO
-- 2. Creating a column encryption key
CREATE COLUMN ENCRYPTION KEY TestColumnEncryptKey WITH VALUES (
    COLUMN_MASTER_KEY = [TestColumnMasterKey],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E008000630075007200720065006E00740075007300650072002F006D0079002F0064006500330061003700370030006600320035006500620064003600300037003100330030003500620037003700660062003100390038006400310061006500340033003400650036003000310034004D74119935C902E59F57A96C3E6F770826D247135FFFA759B5B013DF4DAF7CFB760A5864DD8381B91924D067BE4F574B50DE7F0D53F278E1C003B5D192865B808C1590224F4A4BB463255101C36D3089F46609B376D7B00FA9F9CEAF715398EECAB790AC6EC8BD18C17B3EB992CAE08FEA6A2F5A2BDDA4F5A700744E45861F993A3C488127E5897B30892DD2734DD5D84F096882A393D5877C5A20E392888FE0357F46DB578AEB4C677CFFCE2281276C4D12F3E5AC3BCCC09B78BB0E522D86F9B2CF989F14695B7CB95A478194ECBD175B5C7C1687B7589FD9145B2782CB0BBAB6F7F5B0AC7F8C256EB0D3D87ABAE4F73137FA4AFA387B791B54AC503B53271D
);
GO
-- 3. Creating a table using Always Encrypted (DETERMINISTIC columns can be indexed and used in lookups; RANDOMIZED is more secure)
CREATE TABLE TestEncryptTable (
    id int IDENTITY(1, 1) PRIMARY KEY,
    encryptedLookupCol varchar(11) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = TestColumnEncryptKey) NOT NULL,
    encryptedValueOnlyCol date ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = TestColumnEncryptKey) NOT NULL
);
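One point worth noting: once the columns are encrypted, plain INSERT statements or WHERE clauses that use literal values against them will fail when run from a normal SSMS connection. Data has to be read and written through a connection with Column Encryption Setting=enabled (and access to the column master key certificate), typically from the ADO.NET client application.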

2017-01-04

Coding Pattern of TRY CATCH and TRANSACTION inside a CURSOR

Sometimes you may need to create a SQL Server Agent job that processes SQL commands from a FIFO queue (implemented as a user table with an auto-identity integer column), and if one command raises an error, logs the error into an error log table and then continues with the next command. Such a job can be implemented using a LOCAL FAST_FORWARD cursor, a TRY...CATCH block, and TRANSACTION statements. Below is an example illustrating how to do it.

-- Create Tables
CREATE TABLE QueueCommands (
    ID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SqlCommand varchar(500) NOT NULL
 )
 GO
CREATE TABLE CompletedCommands (
    ID bigint NOT NULL,
    SqlCommand varchar(500) NOT NULL
)
GO
-- Error Log Table
CREATE TABLE ErrorLog(
    errorTime datetime NOT NULL CONSTRAINT DF_ErrorLog_errorTime  DEFAULT (getdate()),
    errorMsg nvarchar(2048) NULL
)
GO

-- Error Logger Stored Procedure
CREATE OR ALTER PROC error_handler_sp AS
BEGIN
   DECLARE @errmsg   nvarchar(2048),
           @severity tinyint,
           @state    tinyint,
           @errno    int,
           @proc     sysname,
           @lineno   int
          
   SELECT @errmsg = error_message(), @severity = error_severity(),
          @state  = error_state(), @errno = error_number(),
          @proc   = error_procedure(), @lineno = error_line()
      
   IF @errmsg NOT LIKE '***%'
   BEGIN
      SELECT @errmsg = '*** ' + coalesce(quotename(@proc), '<dynamic SQL>') +
                       ', Line ' + ltrim(str(@lineno)) + '. Errno ' +
                       ltrim(str(@errno)) + ': ' + @errmsg
   END

   INSERT ErrorLog (errorMsg) VALUES (@errmsg);

   RAISERROR('%s', @severity, @state, @errmsg)
END
GO

-- Create the Stored Procedure to be called by sql agent job
CREATE OR ALTER PROC JobProcessCommands
AS
BEGIN

SET XACT_ABORT ON;
SET NOCOUNT ON;

CREATE TABLE #tmpQueueCommands (
    ID bigint PRIMARY KEY,
    SqlCommand varchar(500)
);
INSERT #tmpQueueCommands (ID, SqlCommand) SELECT ID, SqlCommand FROM QueueCommands;

DECLARE @cur_ID bigint, @cur_SqlCommand varchar(500);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ID, SqlCommand FROM #tmpQueueCommands ORDER BY ID
OPEN cur
WHILE 1 = 1
BEGIN
    FETCH cur INTO @cur_ID, @cur_SqlCommand
    IF @@FETCH_STATUS <> 0 BREAK
   
    BEGIN TRY
    BEGIN TRAN
   
    EXECUTE(@cur_SqlCommand);
    IF @@ERROR = 0
    BEGIN
        DELETE TOP(1) QueueCommands OUTPUT deleted.* INTO CompletedCommands WHERE ID = @cur_ID;
    END

    COMMIT
    END TRY
    BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK
    EXEC error_handler_sp
    END CATCH
END
CLOSE cur
DEALLOCATE cur

DROP TABLE #tmpQueueCommands;
END
GO

-- Example
INSERT QueueCommands VALUES ('SELECT 1');
INSERT QueueCommands VALUES ('SELECT 1/0');    -- Error
INSERT QueueCommands VALUES ('SELECT 2');

SELECT * FROM QueueCommands

EXEC JobProcessCommands

SELECT * FROM QueueCommands
SELECT * FROM CompletedCommands
SELECT * FROM ErrorLog
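With the three commands above, running JobProcessCommands should leave only the failing 'SELECT 1/0' row in QueueCommands, move the two successful commands into CompletedCommands, and write a single divide-by-zero entry into ErrorLog, while the error itself is also re-raised to the client by error_handler_sp.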