2017-05-15

Free Tool - IO Statistics Parser

To troubleshoot query performance, SET STATISTICS IO ON is a must for checking the disk activity generated by a query. However, the text-based output of this option is not easy for humans to analyze. Fortunately there is a free tool on the web called Statistics Parser: copy and paste the statistics output from SSMS into it, and it presents the figures in a much more readable report format.
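A minimal sketch of capturing the statistics text to paste into the parser (the table and query are illustrative, and the output numbers will vary):

```sql
-- Enable I/O statistics for the current session
SET STATISTICS IO ON;
GO
-- Run any query; this one is illustrative
SELECT COUNT(*) FROM Sales.SalesOrderDetail WHERE OrderQty >= 2;
GO
-- The Messages tab now shows lines like:
-- Table 'SalesOrderDetail'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...
-- Copy that text into the Statistics Parser web page for a formatted report.
```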

2017-04-21

Running Query Execution Statistics

SQL Server 2014 introduced a new DMV, sys.dm_exec_query_profiles, which can be used to monitor real-time query progress while the query is executing. The session being checked must have either SET STATISTICS XML ON or SET STATISTICS PROFILE ON enabled, so most likely you will run the query to check in SSMS.
The SQL script below employs this DMV to show the operators of a particular SQL session, with the most expensive I/O operations on top:

DECLARE @session_id smallint = <spid of the query to check in SSMS>
SELECT
P.plan_handle, P.physical_operator_name, OBJECT_NAME(P.[object_id]) AS objName,
P.node_id, P.index_id, P.cpu_time_ms, P.estimate_row_count,
P.row_count, P.logical_read_count, P.elapsed_time_ms,
P.read_ahead_count, P.scan_count
FROM sys.dm_exec_query_profiles AS P
WHERE P.session_id = @session_id
ORDER BY (P.row_count-P.estimate_row_count) + P.logical_read_count DESC
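As noted above, the session being monitored must have one of the statistics options enabled first; for example, in the SSMS window that will run the query to be profiled:

```sql
-- Run this in the session you want to profile, not in the monitoring session
SET STATISTICS XML ON;
GO
-- <run the long-running query to be profiled here>
```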


2017-03-14

Storage Best Practice - 64KB NTFS Allocation Unit Size

According to the SQL Server Best Practices Article, it is recommended that you use a 64-KB NTFS allocation unit size for data, log, and tempdb files. Below is my experience of how this best practice improved the throughput of my production system by reducing transaction log disk write latency.

The transaction log file was originally placed on a disk partition with the default 4-KB allocation unit size, and disk write latency often climbed above 10 ms.

Create a new disk partition with a 64-KB NTFS allocation unit size: Administrative Tools > Computer Management > Storage > Disk Management > Format > NTFS, 64K.

Double-check the new partition:
  1. Open an Administrator command prompt
  2. Run fsutil fsinfo ntfsinfo [your drive]
  3. Check that the "Bytes Per Cluster" value is 65536

Move transaction log file to the new partition:
/* Ref. https://technet.microsoft.com/en-us/library/gg452698.aspx */
-- Check file size
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Shrink the transaction log file (release unused space before moving)
USE YourDB
GO
DBCC SHRINKFILE(YourDB_log, TRUNCATEONLY);
GO
USE master
GO
-- Check file size
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO

-- Offline
ALTER DATABASE YourDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO

-- *** COPY FILE TO NEW LOCATION USING WINDOWS FILE EXPLORER BY YOURSELF ***

-- Move file
USE master
GO
ALTER DATABASE YourDB MODIFY FILE (NAME = YourDB_log, FILENAME = '<*** NEW PATH ***>\YourDB_log.ldf')
GO
-- Check file location changed
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Online
USE master
GO
ALTER DATABASE YourDB SET ONLINE
GO
-- Check file again
USE master
GO
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files WHERE DB_NAME(database_id) = 'YourDB'
GO
-- Grow the transaction log file back to its original size, e.g. 10 GB
USE master
GO
ALTER DATABASE [YourDB]
MODIFY FILE ( NAME = N'YourDB_log', SIZE = 10000MB )
GO


Now the transaction log disk write latency stays under 2 ms.
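These latency figures can be measured from within SQL Server using sys.dm_io_virtual_file_stats; a sketch (the database name is illustrative, and the stalls are cumulative since instance startup):

```sql
-- Average I/O stall per write/read for each file of YourDB
SELECT DB_NAME(vfs.database_id) AS db_name,
       mf.name AS logical_name,
       mf.physical_name,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('YourDB'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;
```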

2017-02-02

SQL Server 2016 Always Encrypted - Part 1

Always Encrypted is a client-side encryption technology in which data is encrypted and decrypted transparently by the client driver: it is encrypted before being written and decrypted only when read by an approved application. Unlike Transparent Data Encryption (TDE), which encrypts the data on disk but allows the data to be read by any application that queries it, Always Encrypted requires your client application to use an Always Encrypted-enabled driver to communicate with the database. At this time, the only Always Encrypted-enabled driver is the .NET Framework Data Provider for SQL Server (a.k.a. ADO.NET), which requires installation of .NET Framework version 4.6 on the client computer.
Below is an example of how to create a table with Always Encrypted columns:
-- 1. Creating a column master key
USE TestDB
GO
CREATE COLUMN MASTER KEY TestColumnMasterKey WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/DE3A770F25EBD6071305B77FB198D1AE434E6014'
);
GO
-- 2. Creating a column encryption key
CREATE COLUMN ENCRYPTION KEY TestColumnEncryptKey WITH VALUES (
    COLUMN_MASTER_KEY = [TestColumnMasterKey],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E008000630075007200720065006E00740075007300650072002F006D0079002F0064006500330061003700370030006600320035006500620064003600300037003100330030003500620037003700660062003100390038006400310061006500340033003400650036003000310034004D74119935C902E59F57A96C3E6F770826D247135FFFA759B5B013DF4DAF7CFB760A5864DD8381B91924D067BE4F574B50DE7F0D53F278E1C003B5D192865B808C1590224F4A4BB463255101C36D3089F46609B376D7B00FA9F9CEAF715398EECAB790AC6EC8BD18C17B3EB992CAE08FEA6A2F5A2BDDA4F5A700744E45861F993A3C488127E5897B30892DD2734DD5D84F096882A393D5877C5A20E392888FE0357F46DB578AEB4C677CFFCE2281276C4D12F3E5AC3BCCC09B78BB0E522D86F9B2CF989F14695B7CB95A478194ECBD175B5C7C1687B7589FD9145B2782CB0BBAB6F7F5B0AC7F8C256EB0D3D87ABAE4F73137FA4AFA387B791B54AC503B53271D
);
GO
-- 3. Creating a table using Always Encrypted (DETERMINISTIC supports equality lookups and indexing; RANDOMIZED is more secure)
CREATE TABLE TestEncryptTable (
    id int IDENTITY(1, 1) PRIMARY KEY,
    encryptedLookupCol varchar(11) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = TestColumnEncryptKey) NOT NULL,
    encryptedValueOnlyCol date ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = TestColumnEncryptKey) NOT NULL
);
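Once the table exists, the encryption metadata can be verified from the catalog views; a sketch:

```sql
-- Check which columns of TestEncryptTable are encrypted, and how
SELECT c.name AS column_name,
       c.encryption_type_desc,
       c.encryption_algorithm_name,
       cek.name AS column_encryption_key
FROM sys.columns AS c
JOIN sys.column_encryption_keys AS cek
  ON cek.column_encryption_key_id = c.column_encryption_key_id
WHERE c.object_id = OBJECT_ID('dbo.TestEncryptTable');
```

Remember that the client must also add Column Encryption Setting=Enabled to its connection string for the driver to encrypt and decrypt these columns transparently.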

2017-01-04

Coding Pattern of TRY CATCH and TRANSACTION inside a CURSOR

Sometimes you may need to create a SQL Agent job which processes SQL commands inside a FIFO queue (implemented as a user table with an auto-identity integer column); if one command raises an error, the job logs the error into an error log table and then continues with the next command. Such a job can be implemented using a LOCAL FAST_FORWARD cursor, a TRY...CATCH block, and transaction statements. Below is an example illustrating how to do it.

-- Create Tables
CREATE TABLE QueueCommands (
    ID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SqlCommand varchar(500) NOT NULL
 )
 GO
CREATE TABLE CompletedCommands (
    ID bigint NOT NULL,
    SqlCommand varchar(500) NOT NULL
)
GO
-- Error Log Table
CREATE TABLE ErrorLog(
    errorTime datetime NOT NULL CONSTRAINT DF_ErrorLog_errorTime  DEFAULT (getdate()),
    errorMsg nvarchar(2048) NULL
)
GO

-- Error Logger Stored Procedure
CREATE OR ALTER PROC error_handler_sp AS
BEGIN
   DECLARE @errmsg   nvarchar(2048),
           @severity tinyint,
           @state    tinyint,
           @errno    int,
           @proc     sysname,
           @lineno   int
          
   SELECT @errmsg = error_message(), @severity = error_severity(),
          @state  = error_state(), @errno = error_number(),
          @proc   = error_procedure(), @lineno = error_line()
      
   IF @errmsg NOT LIKE '***%'
   BEGIN
      SELECT @errmsg = '*** ' + coalesce(quotename(@proc), '<dynamic SQL>') +
                       ', Line ' + ltrim(str(@lineno)) + '. Errno ' +
                       ltrim(str(@errno)) + ': ' + @errmsg
   END

   INSERT ErrorLog (errorMsg) VALUES (@errmsg);

   RAISERROR('%s', @severity, @state, @errmsg)
END
GO

-- Create the Stored Procedure to be called by sql agent job
CREATE OR ALTER PROC JobProcessCommands
AS
BEGIN

SET XACT_ABORT ON;
SET NOCOUNT ON;

CREATE TABLE #tmpQueueCommands (
    ID bigint PRIMARY KEY,
    SqlCommand varchar(500)
);
INSERT #tmpQueueCommands (ID, SqlCommand) SELECT ID, SqlCommand FROM QueueCommands;

DECLARE @cur_ID bigint, @cur_SqlCommand varchar(500);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ID, SqlCommand FROM #tmpQueueCommands ORDER BY ID
OPEN cur
WHILE 1 = 1
BEGIN
    FETCH cur INTO @cur_ID, @cur_SqlCommand
    IF @@FETCH_STATUS <> 0 BREAK
   
    BEGIN TRY
    BEGIN TRAN
   
    EXECUTE(@cur_SqlCommand);
    IF @@ERROR = 0
    BEGIN
        DELETE TOP(1) QueueCommands OUTPUT deleted.* INTO CompletedCommands WHERE ID = @cur_ID;
    END

    COMMIT
    END TRY
    BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK
    EXEC error_handler_sp
    END CATCH
END
CLOSE cur
DEALLOCATE cur

DROP TABLE #tmpQueueCommands;
END
GO

-- Example
INSERT QueueCommands VALUES ('SELECT 1');
INSERT QueueCommands VALUES ('SELECT 1/0');    -- Error
INSERT QueueCommands VALUES ('SELECT 2');

SELECT * FROM QueueCommands

EXEC JobProcessCommands

SELECT * FROM QueueCommands
SELECT * FROM CompletedCommands
SELECT * FROM ErrorLog

2016-12-13

Transaction Isolation Level

By default, when you open a connection to SQL Server, its TRANSACTION ISOLATION LEVEL is READ COMMITTED. However, client applications can change the transaction isolation level of their connections, for example by running the SET TRANSACTION ISOLATION LEVEL statement. When diagnosing SQL blocking and deadlocks, make sure the transaction isolation level is not overlooked. The script below checks the transaction isolation level of all currently running requests:
SELECT
    r.session_id, s.transaction_isolation_level,
    CASE s.transaction_isolation_level
        WHEN 0 THEN 'Unspecified'
        WHEN 1 THEN 'ReadUncommitted'
        WHEN 2 THEN 'ReadCommitted'
        WHEN 3 THEN 'Repeatable'
        WHEN 4 THEN 'Serializable'
        WHEN 5 THEN 'Snapshot'
    END AS transaction_isolation_level_Name,
    OBJECT_NAME(t.objectid, t.dbid) AS object_name,
    t.[text],
    SUBSTRING(
        t.[text],
        (r.statement_start_offset / 2) + 1,
        ((CASE r.statement_end_offset
              WHEN -1 THEN DATALENGTH(t.[text])
              ELSE r.statement_end_offset
          END - r.statement_start_offset) / 2) + 1) AS stmt,
    r.start_time, r.status, r.command, DB_NAME(r.database_id) AS db_name,
    s.login_name, s.host_name, s.program_name, s.total_elapsed_time
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) t
WHERE r.status IN ('runnable', 'suspended', 'running')


If it is set by the client application and you are not able to change the application code, you can override the isolation level inside the stored procedure, provided the application calls stored procedures and you have permission to alter them. E.g.
-- a sample stored procedure
CREATE PROCEDURE uspTestTranIsoLvl
    @set bit
AS
BEGIN
IF @set = 1
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
   
SELECT session_id, transaction_isolation_level,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL
FROM sys.dm_exec_sessions WHERE session_id = @@SPID;

END
GO


-- Demonstration
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

SELECT 'Before', session_id, transaction_isolation_level,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL
FROM sys.dm_exec_sessions WHERE session_id = @@SPID;

SELECT 'NOT set';
EXEC uspTestTranIsoLvl 0;

SELECT 'SET';
EXEC uspTestTranIsoLvl 1;

SELECT 'After', session_id, transaction_isolation_level,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL
FROM sys.dm_exec_sessions WHERE session_id = @@SPID;

The demonstration above shows that the transaction isolation level can be overridden inside a stored procedure, and it reverts to the original level after the stored procedure returns.

2016-11-14

Increasing usability of Filtered Index

A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance as well as reduce index maintenance and storage costs compared with full-table indexes.
To increase the chance that the query optimizer will employ the filtered indexes you create, make sure to include all the filter column(s) of the index in its index key or its included-column list. The example below demonstrates why.


USE AdventureWorks2008
GO
-- Filtered Column in index key
DROP INDEX [Sales].[SalesOrderDetail].IX_1
GO
CREATE INDEX IX_1 ON [Sales].[SalesOrderDetail] (ProductID, SalesOrderID, OrderQty) WHERE OrderQty >= 2;
GO
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 2        -- Use Filtered Index
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 3        -- Use Filtered Index

-- Filtered Column NOT in index
DROP INDEX [Sales].[SalesOrderDetail].IX_1
GO
CREATE INDEX IX_1 ON [Sales].[SalesOrderDetail] (ProductID, SalesOrderID) WHERE OrderQty >= 2;
GO
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 2        -- Uses filtered index
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 3        -- Does NOT use the filtered index!

-- Filtered Column in INCLUDED
DROP INDEX [Sales].[SalesOrderDetail].IX_1
GO
CREATE INDEX IX_1 ON [Sales].[SalesOrderDetail] (ProductID, SalesOrderID) INCLUDE (OrderQty) WHERE OrderQty >= 2;
GO
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 2        -- Use
SELECT ProductID, SalesOrderID FROM [Sales].[SalesOrderDetail] WHERE OrderQty >= 3        -- Use

By the way, the query optimizer will not consider a filtered index if the predicate that matches the filter uses local variables or parameters, because the cached plan must be valid for any value. Dynamic SQL can help.
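For instance, with the IX_1 index above (filtered on OrderQty >= 2), a local variable blocks the filtered index, while dynamic SQL that embeds the literal value allows it; a sketch:

```sql
DECLARE @qty int = 3;

-- Local variable: at compile time the optimizer cannot prove @qty >= 2,
-- so the filtered index is not considered
SELECT ProductID, SalesOrderID
FROM Sales.SalesOrderDetail
WHERE OrderQty >= @qty;

-- Dynamic SQL embeds the literal, so the plan can use the filtered index
DECLARE @sql nvarchar(300) =
    N'SELECT ProductID, SalesOrderID
      FROM Sales.SalesOrderDetail
      WHERE OrderQty >= ' + CAST(@qty AS nvarchar(10)) + N';';
EXEC sp_executesql @sql;
```

Note that building SQL strings this way trades plan reuse for plan quality and requires care against SQL injection; here the value is cast from an int, which is safe.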