2015-12-10

Speed Up a Query with an Indexed, Persisted, Computed Column

Some predicates on table columns, whether used for filtering (WHERE clause) or joining (JOIN ... ON clause), cannot easily be rewritten into the sargable form (raw column = 'xxx'), which prevents the query from being satisfied by an index seek operator. For example, the predicate WHERE ISNULL(col1, 0) = @var cannot be simply rewritten, except as WHERE (col1 = @var OR (col1 IS NULL AND @var = 0)), and such a statement won't perform well.
In order to solve this problem, we can add a PERSISTED COMPUTED column to the table, e.g. ADD col1NullToZero AS ISNULL(col1, 0) PERSISTED. Because the computed column is persisted, we can create indexes on it (ref. Creating Indexes on Persisted Computed Columns), which gives us an index able to cover the query. Surprisingly, even when the query doesn't reference the computed column directly in its predicate, SQL Server is still able to discover the benefit of using the index on the computed column.
Let's see an example below:

1. Create a table, called [MainTran], with two columns: TranID as its primary key, and a nullable column DepositID with a nonclustered index.
CREATE TABLE [MainTran] (
    [TranID] [int] NOT NULL PRIMARY KEY,
    [DepositID] [int] NULL
)
GO
CREATE NONCLUSTERED INDEX [IX_MainTran_DepositID] ON [MainTran] (
    [DepositID]
)
GO

2. Populate some rows into it: some with concrete DepositID values, some with DepositID = 0, and some with DepositID NULL.
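A minimal population sketch (the sample values are mine, not from the original post):
INSERT MainTran (TranID, DepositID)
VALUES (1, 100), (2, 200), (3, 0), (4, 0), (5, NULL), (6, NULL);
GO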

3. The following query can be satisfied by an index seek, but its result is not correct (the NULL rows are missed):
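The original screenshot is gone, but the query was presumably along these lines; a seek on IX_MainTran_DepositID can never match the NULL rows when @var = 0:
DECLARE @var int = 0;
SELECT TranID, DepositID FROM MainTran
WHERE DepositID = @var; -- index seek, but the NULL rows are missing from the result
GO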

4. The following query returns the correct result, but it scans the whole table:
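Presumably the ISNULL form:
DECLARE @var int = 0;
SELECT TranID, DepositID FROM MainTran
WHERE ISNULL(DepositID, 0) = @var; -- correct result, but the expression is not sargable, so it scans
GO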

5. The following query gets a more complicated execution plan and worse performance:
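Presumably the OR rewrite mentioned at the top of this post:
DECLARE @var int = 0;
SELECT TranID, DepositID FROM MainTran
WHERE (DepositID = @var OR (DepositID IS NULL AND @var = 0)); -- correct result, but a convoluted plan
GO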

6. Create a PERSISTED COMPUTED column based on that filtering column, and add an index on it:
ALTER TABLE dbo.MainTran ADD DepositIdNullToZero AS ISNULL(DepositID, 0) PERSISTED
GO
CREATE INDEX IX_MainTran_DepositIdNullToZero ON MainTran (DepositIdNullToZero);
GO

7. Using the computed column in the query, it now seeks on the new index:
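A sketch of the rewritten query:
DECLARE @var int = 0;
SELECT TranID, DepositID FROM MainTran
WHERE DepositIdNullToZero = @var; -- index seek on IX_MainTran_DepositIdNullToZero
GO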

8. Even when the query uses the original expression predicate, SQL Server is still able to use the new index on the computed column!
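In other words, re-running the step 4 query unchanged now gets an index seek:
DECLARE @var int = 0;
SELECT TranID, DepositID FROM MainTran
WHERE ISNULL(DepositID, 0) = @var; -- the optimizer matches the expression to DepositIdNullToZero and seeks its index
GO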

2015-11-13

Recover Log Shipping Secondary DB from status Suspect

Sometimes your Log Shipping secondary database will go into "Suspect" mode for various reasons, including a failed log shipping Restore Job, a corrupted log backup file, or transaction log backup files grown too large because of maintenance tasks on the primary database such as index rebuilds. You can recover the secondary database by restoring a recent Full/Differential database backup WITH the NORECOVERY option. Below are the steps:
  1. Make a Full/Differential database Backup from the Primary database;
  2. Copy this database backup to the Secondary server;
  3. Restore the Secondary database from this backup WITH NORECOVERY (see the sketch after this list);
  4. Start the Log Shipping Restore Job on Secondary server;
  5. Check the Job History of the Log Shipping Restore Job; you can see it skips all the log backup files with LSNs earlier than the database backup you just restored;
  6. If the Log Shipping Copy Job is still running, the Restore Job will eventually find the log backup files with LSNs after the database backup;
  7. At last the secondary database catches up and log shipping resumes.
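A minimal sketch of step 3, assuming hypothetical database and file names:
RESTORE DATABASE [SecondaryDB]
FROM DISK = N'D:\Backup\PrimaryDB_Full.bak'
WITH NORECOVERY, REPLACE;
GO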

2015-11-03

Checking Log Shipping Performance

The SQL script below can be used to check the performance of Log Shipping, including the log backup size and the delay between the primary database and the secondary database.

DECLARE @dbname sysname, @days int
SET @dbname = 'SalonWebDB'
SET @days = -14 --previous number of days, script will default to 30
SELECT
 rsh.destination_database_name AS [Database],
 CASE WHEN rsh.restore_type = 'D' THEN 'Database'
  WHEN rsh.restore_type = 'F' THEN 'File'
  WHEN rsh.restore_type = 'G' THEN 'Filegroup'
  WHEN rsh.restore_type = 'I' THEN 'Differential'
  WHEN rsh.restore_type = 'L' THEN 'Log'
  WHEN rsh.restore_type = 'V' THEN 'Verifyonly'
  WHEN rsh.restore_type = 'R' THEN 'Revert'
  ELSE rsh.restore_type
 END AS [Restore Type],
 rsh.restore_date AS [Restore Date],
 bmf.physical_device_name AS [Restored From],
 bs.backup_start_date,
 bs.backup_finish_date,
 CAST(bs.backup_size / 1024.0 / 1024.0 AS decimal(19, 2)) AS [Backup Size MB],
 DATEDIFF(second, bs.backup_finish_date, rsh.restore_date) AS [Delay in sec.]
FROM msdb.dbo.restorehistory rsh
 INNER JOIN msdb.dbo.backupset bs ON rsh.backup_set_id = bs.backup_set_id
 INNER JOIN msdb.dbo.restorefile rf ON rsh.restore_history_id = rf.restore_history_id
 INNER JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bs.media_set_id
WHERE rsh.restore_date >= DATEADD(dd, ISNULL(@days, -30), GETDATE()) --want to search for previous days
AND destination_database_name = ISNULL(@dbname, destination_database_name) --if no dbname, then return all
AND rsh.restore_type = 'L'    -- Log
ORDER BY rsh.restore_history_id

Sample result:


Using the query result, you can plot a graph to see the trends of size and delay.

2015-10-09

Offload Read-Only Queries to a Log Shipping Secondary Server

You can reduce the load on your primary/production server by using a Log Shipping secondary server for read-only queries. To do this, the secondary database must be in STANDBY mode; you will not be able to run queries if the database is in NORECOVERY mode. When you configure the log shipping secondary server using SSMS, on the "Restore Transaction Log" tab, choose the "Standby mode" option. You can also decide whether to disconnect users from the secondary database while the restore operation is underway.

After the log shipping secondary server setup is completed, you can check that the secondary database is in standby/readonly mode:
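For example, sys.databases exposes the standby and read-only flags (a sketch, assuming the secondary database is named SecondaryDB):
SELECT name, state_desc, is_read_only, is_in_standby
FROM sys.databases
WHERE name = 'SecondaryDB';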

Now you are able to run read-only queries on the secondary database:

If you try modifying data on the secondary database, it will fail:
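Something like the following, where SomeTable is a hypothetical table; the attempt raises error 3906 because the database is read-only:
UPDATE SecondaryDB.dbo.SomeTable SET Col1 = 1;
-- Msg 3906: Failed to update database "SecondaryDB" because the database is read-only.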

If you did NOT enable the "Disconnect users in the database when restoring backups" option, the Restore Job cannot restore transaction log backups to the secondary database while users are connected to it. Transaction log backups will accumulate until there are no user connections to the database.

After all the connections to the secondary database are disconnected, the restore job will be resumed.

Any data modifications in the primary database will be transferred to the secondary database.

Trick: to prevent a read-only query from blocking the restore job, you can have the connection USE another database, e.g. master, and query the secondary database's tables using three-part names.
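A sketch of the trick, with hypothetical names:
USE master;
GO
SELECT TOP 10 * FROM SecondaryDB.dbo.SomeTable;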

2015-09-20

Check the Progress of Shrink Database

You can use the SSMS GUI or the DBCC SHRINKDATABASE / DBCC SHRINKFILE commands to shrink the size of the data and log files in a specified database. However, neither the SSMS GUI nor the DBCC commands tell you the progress of the operation. You can consult the percent_complete column of the sys.dm_exec_requests DMV to check the progress, as demonstrated below:

SELECT percent_complete, start_time, [status], command,
    estimated_completion_time, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE percent_complete > 0
 
You should look for DbccFilesCompact in the command column.

2015-08-26

View Running SQL Statements inside Currently Executing Batches and Stored Procedures

The SQL script below can be used to view the currently running SQL statement (running, in the runnable queue, rolling back, or waiting for resources such as locks) inside every batch and stored procedure currently being processed. Note that the [text255] column represents the whole batch/stored procedure but only shows the first 255 characters. Also, stored procedures are represented by their "CREATE PROCEDURE" statement; that is just the definition, and does NOT mean the procedure is being created.
This script is very useful for finding which SQL statement is stuck inside a batch or stored procedure.

SELECT
    CASE WHEN (SELECT COUNT(*) FROM sys.sysprocesses WHERE spid = r.spid) > 1
         THEN 'Multithread' ELSE '' END AS Multithread,
    LEFT(t.[text], 255) AS [text255],
    SUBSTRING(
        t.[text],
        (r.stmt_start / 2) + 1,
        ((CASE r.stmt_end
              WHEN -1 THEN DATALENGTH(t.[text])
              ELSE r.stmt_end
          END - r.stmt_start) / 2) + 1) AS stmt,
    DB_NAME(t.dbid) AS ObjectDB,
    OBJECT_NAME(t.objectid, t.dbid) AS Object,
    r.spid, r.ecid, r.blocked, r.waittime, r.lastwaittype, r.waitresource,
    DB_NAME(r.dbid) AS connectDB, r.cpu, r.physical_io, r.memusage, r.login_time,
    r.last_batch, r.open_tran, r.status, r.hostname, r.program_name, r.loginame,
    r.cmd, r.net_library, r.stmt_start, r.stmt_end
FROM sys.sysprocesses AS r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS t
WHERE r.status IN ('runnable', 'suspended', 'running', 'rollback', 'pending', 'spinloop')
ORDER BY spid, ecid



2015-08-13

Fixing SQL Server Login SID by mapping it to Database User

When you restore a database from another server, especially when setting up an Availability Group, Database Mirroring, or Log Shipping, the client application may not be able to use the database and gets "login failed for user" after failover. This is due to a well-known problem of SID mismatch between the SQL Server Login and the Database User. If you recreate a SQL Server login (i.e. not a Windows one), by default you get a new security ID (SID), even though you use the same user name and password. But the SID stored in the Database User is brought over from the original (primary) database, and is linked only to the SID of the corresponding SQL Server Login on the original (primary) server.

In order to fix this problem permanently, you should create the SQL Server Logins on the secondary server by explicitly specifying the SID, e.g.

1. In the primary database, get the SID of the database user:
USE LoginTest;
GO
SELECT sid FROM sysusers WHERE name = 'GregTest';

2. On the secondary server, after you have restored the database, take a look at the user mapping and you will see that the Database User is NOT linked to any Login:
USE LoginTest
GO
EXEC sp_change_users_login 'Report';

3. You should fix it by creating a new SQL Server Login on the secondary server with the same SID:
CREATE LOGIN GregTest WITH PASSWORD = 'P@ssw0rd', SID = 0x2261C43EFD53F240AA989A8FB9E084DC

4. Check the SID mapping again; the user should no longer be reported as orphaned:
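Re-running the report from step 2 should now return no rows:
USE LoginTest
GO
EXEC sp_change_users_login 'Report'; -- an empty result means no orphaned users remain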

2015-08-11

Checking Stored Procedure Performance Statistics

We can use the sys.dm_exec_procedure_stats DMV to check the last execution time, elapsed time, and other performance figures for stored procedures that exist in the plan cache. Below is the SQL statement:

SELECT
DB_NAME(database_id) AS DB,
OBJECT_NAME(object_id, database_id) AS [OBJECT],
*
FROM sys.dm_exec_procedure_stats

2015-07-26

@@TRANCOUNT = 2 During DML statement Executing

SQL Server BOL says that @@TRANCOUNT returns the number of BEGIN TRANSACTION statements that have occurred on the current connection. It also says that the open_tran column of sys.sysprocesses returns the number of open transactions for the process. So you may intuitively think that if the transaction count of a process is greater than 1, it must be inside an explicit nested transaction.
However, explicit nested transactions are not the only reason the transaction count can be greater than 1 during execution of a DML statement. Consider the following example:

IF EXISTS (SELECT 1 FROM sys.objects WHERE [type] = 'U' AND name = 'T1') DROP TABLE T1;
GO
CREATE TABLE T1
(
Col1 int
);
GO

-- INSERT
INSERT INTO T1
--SELECT @@TRANCOUNT;
SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID;
-- returns 2
SELECT Col1 FROM T1;

-- UPDATE
UPDATE T1 SET
Col1 = --@@TRANCOUNT
(SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID)
WHERE Col1 = 2;
-- returns 2
SELECT Col1 FROM T1;

-- DELETE
DELETE
FROM T1
WHERE Col1 = --@@TRANCOUNT
(SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID)
-- returns nothing (as @@TRANCOUNT / sys.sysprocesses open_tran = 2)
SELECT Col1 FROM T1;

-- INSERT within Explicit Transaction
BEGIN TRAN
INSERT INTO T1
--SELECT @@TRANCOUNT;
SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID;
COMMIT
-- Still returns 2
SELECT Col1 FROM T1;

-- UPDATE within Explicit Transaction
BEGIN TRAN
UPDATE T1 SET
Col1 = --@@TRANCOUNT
(SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID)
WHERE Col1 = 2;
COMMIT
-- Still returns 2
SELECT Col1 FROM T1;

-- DELETE within Explicit Transaction
BEGIN TRAN
DELETE
FROM T1
WHERE Col1 = --@@TRANCOUNT
(SELECT open_tran FROM sys.sysprocesses WHERE spid = @@SPID)
COMMIT
-- returns nothing (as @@TRANCOUNT / sys.sysprocesses open_tran = 2)
SELECT Col1 FROM T1;

This shows that while a DML statement is executing, more than one open transaction is reported. The results are the same whether you check @@TRANCOUNT, the open_tran column of sys.sysprocesses, or the open_transaction_count column of sys.dm_exec_requests.
Effectively, in addition to the one transaction always associated with any DML statement, there is another nested transaction opened internally by SQL Server, lasting for the duration of the DML statement's execution. To be clear, the second transaction is open only while a DML statement is executing. This situation can easily be observed, especially during process blocking.

2015-07-15

CHECKDB on an Explicit Snapshot

Beginning with SQL Server 2005, DBCC CHECKDB creates a hidden snapshot on the same volume as the database – you have no control over where it's placed. If you run CHECKDB while your server has a heavy workload, the snapshot can run out of space and you'll get an error showing that CHECKDB didn't complete. To overcome this problem, you can create your own database snapshot on a drive that has enough space and run CHECKDB against that snapshot. CHECKDB will know that it's running against a snapshot and won't create another one. Below is an example:

CREATE DATABASE TestDB_Snapshot ON
(
    NAME = TestDB,
    FILENAME = 'C:\TestDB.ss'
) AS SNAPSHOT OF TestDB;
GO
DBCC CHECKDB (TestDB_Snapshot) WITH NO_INFOMSGS;
GO

2015-07-14

Check Dependent Objects

In order to check the objects referenced by a specified object, e.g. to find the tables referenced by a stored procedure, we previously used the sp_depends built-in system stored procedure. However, if the referenced objects (e.g. a table) are created after the referencing object (e.g. a stored procedure), sp_depends cannot discover the dependency. Starting from SQL Server 2008, there are two new DMFs (sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities) which overcome this problem. Let's try them in the following example:

USE TempDB
GO

DROP TABLE TestTable
GO
CREATE TABLE dbo.TestTable
(
    ID INT,
    Name VARCHAR(100)
)
GO

-- referencing usp1 NOT created yet
DROP PROC usp2
GO
CREATE PROCEDURE dbo.usp2
AS
EXEC dbo.usp1
GO

DROP PROC usp1
GO
CREATE PROCEDURE dbo.usp1
AS
SELECT ID, Name
FROM TestTable
GO

SELECT * FROM sys.dm_sql_referencing_entities ('dbo.usp1', 'OBJECT');
SELECT * FROM sys.dm_sql_referenced_entities ('dbo.usp1', 'OBJECT');
SELECT * FROM sys.dm_sql_referenced_entities ('dbo.usp2', 'OBJECT');

2015-07-03

Get current stored proc name and params list

Inside a stored procedure, you can get the current stored procedure's name and its parameter list with the following statements:
DECLARE @names varchar(MAX) = OBJECT_NAME(@@PROCID) + ' ';
SELECT @names += name + ',' FROM sys.parameters WHERE [object_id] = @@PROCID ORDER BY parameter_id;
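A self-contained sketch (the procedure name uspEcho and its parameters are hypothetical):
CREATE PROCEDURE dbo.uspEcho
    @id int,
    @name varchar(50)
AS
BEGIN
    DECLARE @names varchar(MAX) = OBJECT_NAME(@@PROCID) + ' ';
    SELECT @names += name + ',' FROM sys.parameters
    WHERE [object_id] = @@PROCID ORDER BY parameter_id;
    PRINT @names; -- prints: uspEcho @id,@name,
END
GO
EXEC dbo.uspEcho 1, 'test';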

2015-06-02

Stored Procedure Error Handling Pattern

Thanks to SQL Server MVP Erland Sommarskog, we have a unified and reliable error and transaction handling pattern for stored procedures. Here I briefly demonstrate how to use it.

1. Create the error handler sp:
-- =============================================
-- Author: Erland Sommarskog
-- Description: Error Handler SP
-- Usage: To be called inside CATCH block of a stored proc to reraise error. Error Line Number can be precisely checked by using [sp_helptext] system stored proc., e.g. sp_helptext '[dbo].[the_sp_name]'
-- Reference: General Pattern for Error Handling (http://www.sommarskog.se/error_handling/Part1.html)
-- =============================================
CREATE PROCEDURE error_handler_sp AS
BEGIN
   DECLARE @errmsg   nvarchar(2048),
           @severity tinyint,
           @state    tinyint,
           @errno    int,
           @proc     sysname,
           @lineno   int
         
   SELECT @errmsg = error_message(), @severity = error_severity(),
          @state  = error_state(), @errno = error_number(),
          @proc   = error_procedure(), @lineno = error_line()
     
   IF @errmsg NOT LIKE '***%'
   BEGIN
      SELECT @errmsg = '*** ' + coalesce(quotename(@proc), '<dynamic SQL>') +
                       ', Line ' + ltrim(str(@lineno)) + '. Errno ' +
                       ltrim(str(@errno)) + ': ' + @errmsg
   END
   RAISERROR('%s', @severity, @state, @errmsg)
END
GO

2. Employ the error handling pattern in your stored procs:
/* TESTING TABLE */
CREATE TABLE sometable(a int NOT NULL,
                       b int NOT NULL,
                       CONSTRAINT pk_sometable PRIMARY KEY(a, b))
GO

/* ERROR HANDLING PATTERN */
CREATE PROCEDURE insert_data @a int, @b int AS
BEGIN
   SET XACT_ABORT, NOCOUNT ON
   /*
    * The first line in the procedure turns on XACT_ABORT and NOCOUNT in a single statement. This line is the only line to come before BEGIN TRY.
    * Everything else in the procedure should come after BEGIN TRY: variable declarations, creation of temp tables, table variables, everything.
    * Even if you have other SET commands in the procedure (there is rarely a reason for this, though), they should come after BEGIN TRY.
    */
   BEGIN TRY
     /* Non-transactional statements, e.g. variable declarations */
      BEGIN TRANSACTION /* If your procedure does not perform any updates or only has a single INSERT/UPDATE/DELETE/MERGE statement, you typically don't need an explicit transaction at all. */
      /* Transactional statements */
      INSERT sometable(a, b) VALUES (@a, @b) -- sample
      INSERT sometable(a, b) VALUES (@b, @a) -- sample
      COMMIT TRANSACTION
      /* Non-transactional statements, e.g. a final SELECT to return data or assign values to output parameters */
   END TRY
   BEGIN CATCH
      IF @@trancount > 0 ROLLBACK TRANSACTION -- Rolls back any open transaction
      EXEC error_handler_sp -- Reraises the error
      RETURN 55555 -- any error return code you defined on your application (Non-zero, zero is usually understood as success)
   END CATCH /* NEVER have any code after END CATCH for the outermost TRY-CATCH */
END
GO

/* TEST CASES */
-- Outer SP
CREATE PROCEDURE outer_sp @a int, @b int AS
   SET XACT_ABORT, NOCOUNT ON
   BEGIN TRY
      EXEC insert_data @a, @b
   END TRY
   BEGIN CATCH
      IF @@trancount > 0 ROLLBACK TRANSACTION
      EXEC error_handler_sp
      RETURN 55555
   END CATCH
GO
EXEC insert_data 9, NULL
EXEC insert_data 8, 8
EXEC outer_sp 8, 8
EXEC outer_sp null, null
EXEC sp_helptext '[insert_data]' -- check error line number of the error throwing stored proc.

2015-05-18

Update Statistics based on the Percentage of Modifications

Since sp_updatestats updates nearly all statistics in your database (it updates a statistic as long as it has had at least one row modified), I don't recommend that old-fashioned method. A DMF called sys.dm_db_stats_properties was added in SQL Server 2008 R2 SP2 and SQL Server 2012 SP1; we can use it to check the number of modifications to the leading statistics column (the column on which the histogram is built) since the statistics were last updated. Based on the percentage of modifications (#modifications / #total rows), we can pick out the statistics to update.

DECLARE @modifiedPercent int = 20

SELECT
'UPDATE STATISTICS ' + sch.name + '.' + so.name + ' [' + ss.name + '] WITH FULLSCAN' AS UpdateStatsStmt,
sch.name + '.' + so.name AS TableName,
ss.name AS Statistic,
sp.last_updated AS StatsLastUpdated,
sp.rows AS FilteredRows,
sp.unfiltered_rows AS TotalRows,
sp.rows_sampled AS RowsSampled,
sp.modification_counter AS Modifications
FROM sys.stats AS ss
JOIN sys.objects AS so ON ss.object_id = so.object_id
JOIN sys.schemas AS sch ON so.schema_id = sch.schema_id
CROSS APPLY sys.dm_db_stats_properties (so.object_id, ss.stats_id) AS sp
WHERE so.type = 'U'
AND sp.unfiltered_rows > 0
AND sp.modification_counter * 100.0 / sp.unfiltered_rows >= @modifiedPercent -- percentage of rows modified
ORDER BY TableName

2015-04-17

Quickly Solve an Unclosed Transaction

The quickest way to find a process whose transaction is left unclosed, and to see what statements it is running, is:

1. DBCC OPENTRAN
Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the CURRENT database where DBCC OPENTRAN is run. From the output you can obtain the Process ID (SPID).

2. DBCC INPUTBUFFER(@spid)
After you obtain the Process ID from DBCC OPENTRAN, you can run DBCC INPUTBUFFER, passing in that Process ID, to get the last statement sent from the client on that process.

Example: we will create a bad stored procedure and run it for demonstration purposes.
/* A Bad Stored Proc. */
USE [TestDB]
GO
CREATE PROC [dbo].[UnclosedTran]
@p1 varchar(50)
AS
BEGIN
BEGIN TRAN
UPDATE Test1 SET col1 = @p1
END

/* Run that SP */
USE TestDB
GO
EXECUTE UnclosedTran 'ABC'

/* Find the SPID of oldest process with transaction unclosed */
/* use sp_MSForEachDB to loop through all databases */
EXEC sp_MSForEachDB 'DBCC OPENTRAN(?)'



/* Get the SPID value from the DBCC OPENTRAN output, then see its input buffer */
DBCC INPUTBUFFER(61) /* replace your spid value */


After you take note of the "EventInfo" value (the bad stored procedure / SQL script), you may decide to KILL the process, e.g.
KILL 61 /* replace your spid value */

2015-04-15

How to Add New Table(s) to Transactional Replication WITHOUT Re-Generating the Whole Snapshot

You have transactional replication configured in your production environment and need to add a new article (table) to the publication. You want to initialize only the newly added article, in order to avoid taking a complete snapshot of all existing articles in the publication. The steps below show how:

1. Set Publication Properties
USE <DB>
GO
DECLARE @publication sysname = '<publication>'
-- check immediate_sync
EXEC sp_helppublication @publication
-- If immediate_sync = 1, fix it (independent_agent must be 1) by disabling immediate_sync
EXEC sp_changepublication
@publication = @publication,
@property = 'allow_anonymous', @value = 'False'
EXEC sp_changepublication
@publication = @publication,
@property = 'immediate_sync', @value = 'False'
--check
EXEC sp_helppublication @publication

2. Add New Table(s) to the Publication's Articles list
Right-click the Publication -> "Properties" -> "Articles" -> Uncheck "Show only checked articles in the list" -> Check the "New Table" in the list -> Press "OK", e.g.


3. Generate mini-Snapshot for the New Table(s)
Right-Click that "Publication" -> "View Snapshot Status" -> Press "Start".
It should only generate 1 article (if only one new table was added) as shown below:



2015-04-10

Analyzing Deadlocks using the built-in "system_health" Extended Events Session



Background:
By default, SQL Server automatically starts an Extended Events session called “system_health” at start-up. The information this session collects includes any deadlocks that are detected. (ref. http://blogs.msdn.com/b/psssql/archive/2008/07/15/supporting-sql-server-2008-the-system-health-session.aspx)
Without any extra monitoring or load, this default-enabled “system_health” X-Events session gives us a way to check deadlock information, including the locked objects (table/index) and the SQL statements/stored procedure calls involved in the deadlock.
The “system_health” session logs detected events into an in-memory ring buffer of 4 MB; when the buffer fills up, the oldest entries are overwritten.

How to do:
  1. Run the following SELECT statement on the production server:
    (ref. http://blogs.msdn.com/b/sqlserverfaq/archive/2013/04/27/an-in-depth-look-at-sql-server-memory-part-2.aspx)

USE master
GO
SELECT
       xed.value('@timestamp', 'datetime') as Creation_Date,
       xed.query('.') AS Extend_Event
FROM
(
       SELECT CAST([target_data] AS XML) AS Target_Data
       FROM sys.dm_xe_session_targets AS xt
       INNER JOIN sys.dm_xe_sessions AS xs
       ON xs.address = xt.event_session_address
       WHERE xs.name = N'system_health'
       AND xt.target_name = N'ring_buffer'
) AS XML_Data
CROSS APPLY Target_Data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(xed)
ORDER BY Creation_Date DESC

  2. The result is ordered by the time of the deadlock (latest on top), like the following:


  3. Click on the XML inside “Extend_Event” in the result; it will show you the deadlock XML report (the XML tags < and > were transformed to &lt; and &gt;; you can copy the XML into Notepad and replace them for a better view). E.g.
<event name="xml_deadlock_report" package="sqlserver" id="123" version="1" timestamp="2015-04-10T04:52:33.351Z">
<data name="xml_report">
<type name="unicode_string" package="package0" />
<value><deadlock-list>
<victim-list>
<victimProcess id="process586d708"/>
<process-list>
<process id="process586d708" taskpriority="0" logused="432" waitresource="RID: 5:1:144:0" waittime="1625" ownerId="90452" transactionname="user_transaction" lasttranstarted="2015-04-10T12:52:21.713" XDES="0x85a15950" lockMode="U" schedulerid="4" kpid="3624" status="suspended" spid="57" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2015-04-10T12:52:21.710" lastbatchcompleted="2015-04-10T12:51:59.060" clientapp="Microsoft SQL Server Management Studio - Query" hostname="WIN-4VPNBDSGB8D" hostpid="2636" loginname="WIN-4VPNBDSGB8D\Administrator" isolationlevel="read committed (2)" xactid="90452" currentdb="5" lockTimeout="4294967295" clientoption1="673319008" clientoption2="390200">
<executionStack>
<frame procname="" line="14" stmtstart="358" stmtend="428" sqlhandle="0x03000500045a3d02c4b2b60076a400000100000000000000">
</frame>
<frame procname="" line="1" sqlhandle="0x0100050001cf4e1e70280380000000000000000000000000">
</frame>
</executionStack>
<inputbuf>
EXEC uspDeadlockTest_2 2002 </inputbuf>
</process>
<process id="process5852988" taskpriority="0" logused="432" waitresource="RID: 5:1:146:0" waittime="2718" ownerId="90449" transactionname="user_transaction" lasttranstarted="2015-04-10T12:52:20.613" XDES="0x856d7950" lockMode="U" schedulerid="3" kpid="2868" status="suspended" spid="55" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2015-04-10T12:52:20.610" lastbatchcompleted="2015-04-10T12:51:33.770" clientapp="Microsoft SQL Server Management Studio - Query" hostname="WIN-4VPNBDSGB8D" hostpid="2636" loginname="WIN-4VPNBDSGB8D\Administrator" isolationlevel="read committed (2)" xactid="90449" currentdb="5" lockTimeout="4294967295" clientoption1="673319008" clientoption2="390200">
<executionStack>
<frame procname="" line="13" stmtstart="354" stmtend="424" sqlhandle="0x0300050059ed607f6ddab50076a400000100000000000000">
</frame>
<frame procname="" line="1" sqlhandle="0x010005008f2a720ac034ac82000000000000000000000000">
</frame>
</executionStack>
<inputbuf>
EXEC uspDeadlockTest_1 101 </inputbuf>
</process>
</process-list>
<resource-list>
<ridlock fileid="1" pageid="144" dbid="5" objectname="" id="lock82cf8680" mode="X" associatedObjectId="72057594038779904">
<owner-list>
<owner id="process5852988" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="process586d708" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
<ridlock fileid="1" pageid="146" dbid="5" objectname="" id="lock8013dd80" mode="X" associatedObjectId="72057594038845440">
<owner-list>
<owner id="process586d708" mode="X"/>
</owner-list>
<waiter-list>
<waiter id="process5852988" mode="U" requestType="wait"/>
</waiter-list>
</ridlock>
</resource-list>
</deadlock>
</deadlock-list>
</value>
<text />
</data>
</event>

  4. As you can see above, the affected SQL statements or stored procedure calls are in the <inputbuf> element. The deadlock victim (the process killed and rolled back by the SQL Server engine) is identified by the process ID in <victimProcess>. The objects (table/index) involved are represented by resource IDs in different formats, including TAB (table), PAGE, KEY (index), and RID (row).
    (ref.: https://support.microsoft.com/en-us/kb/224453 – section “Waitresource”)

  5. In order to resolve a waitresource ID into a table/index name, you can run the following SQL statements.
    (These statements can also be run on a non-production SQL Server with a production database backup restored on it, preferably a recent one, as the object or data page may no longer exist if the backup is too old.)

The first number in all of the different waitresource ID formats is the db_id; you can check the database name with this SQL:
SELECT DB_NAME(<db_id>) /* this DB_NAME statement can only be run on the production server */

Then you can run the following SQL for the different waitresource formats, with the current database set to the affected database:
USE <DBName>
GO

For Object ID (OBJECT:db_id:object_id) / Table ID (TAB:db_id:object_id) :
SELECT OBJECT_NAME(<object_id>);

For HOBT ID (KEY:db_id:hobt_id) :
SELECT o.name AS TableName, i.name AS IndexName, SCHEMA_NAME(o.schema_id) AS SchemaName
FROM sys.partitions p
JOIN sys.objects o ON p.OBJECT_ID = o.OBJECT_ID
JOIN sys.indexes i ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
WHERE p.hobt_id = <hobt_id>

For Page ID (PAGE:db_id:file_id:page_id) / Row ID (RID:db_id:file_id:page_id:slot) :
DBCC TRACEON(3604)
GO
DBCC PAGE('DBName', file_id, page_id)
The object id will be shown in the result as "Metadata: ObjectId = ???".
The index id will be shown in the result as "Metadata: IndexId = ???".
Then you can get the table name and index name with the OBJECT_NAME function and the sys.indexes catalog view:
SELECT OBJECT_NAME(<Object_Id>)
SELECT name FROM sys.indexes WHERE object_id = <Object_Id> AND index_id = <IndexId>

2015-04-01

Email Alert of Long Uncommitted Transactions

A long-running transaction in a database impacts the performance of your system. Process blocking and deadlocks arise from lock waits on the resources held by the long-running transaction. Transaction log space cannot be reclaimed while there is an active transaction, so the transaction log eventually runs out of space and the database becomes inaccessible (read-only). Also, if you have turned on READ_COMMITTED_SNAPSHOT for a database, SQL Server saves the before-image of modified data in tempdb. This is called the version store. If a transaction runs for a long time, the version store cannot be cleared and keeps growing, filling up the tempdb data file.
In order to monitor long-running (uncommitted) transactions in your database, you can create an email alert that notifies your DBA team using Database Mail. Thanks to the script provided by Paul, I wrote the script below to send such an alert. You can create a SQL Agent Job, paste this script as the job step, and schedule the job to run periodically. (This script uses another stored procedure, ConvertTableToHtml, to render the result as an HTML table; you should create that stored procedure before creating your job.)



SET NOCOUNT ON;

-- **** Threshold in seconds ****
DECLARE @timeoutSec int = 15

DECLARE @recipients varchar(max), @subject nvarchar(255), @body nvarchar(max)

-- **** EMAIL RECIPIENTS, semicolon-delimited ; ****
SET @recipients = 'peter.lee@guosen.com.hk'

SET @subject = 'Long Uncommitted Transaction'
SET @body = 'Uncommitted transactions running longer than the specified threshold of ' + CAST(@timeoutSec AS varchar(10)) + ' seconds.'

DECLARE @htmlTable varchar(max)
CREATE TABLE #t (
[session_id] int,
[Login Name] nvarchar(128),
[Database] nvarchar(128),
[Begin Time] datetime,
[Duration (min.)] varchar(50),
[Last T-SQL Text] nvarchar(max)
)

INSERT #t SELECT
[s_tst].[session_id],
[s_es].[login_name] AS [Login Name],
DB_NAME (s_tdt.database_id) AS [Database],
[s_tdt].[database_transaction_begin_time] AS [Begin Time],
CAST(CAST(DATEDIFF(second, [s_tdt].[database_transaction_begin_time], GETDATE()) / 60.0 AS decimal(9, 1)) AS varchar(50)) AS [Duration (min.)],
[s_est].text AS [Last T-SQL Text]
FROM sys.dm_tran_database_transactions [s_tdt]
JOIN sys.dm_tran_session_transactions [s_tst]
    ON [s_tst].[transaction_id] = [s_tdt].[transaction_id]
JOIN sys.[dm_exec_sessions] [s_es]
    ON [s_es].[session_id] = [s_tst].[session_id]
JOIN sys.dm_exec_connections [s_ec]
    ON [s_ec].[session_id] = [s_tst].[session_id]
LEFT OUTER JOIN sys.dm_exec_requests [s_er]
    ON [s_er].[session_id] = [s_tst].[session_id]
CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est]
WHERE DATEDIFF(second, [s_tdt].[database_transaction_begin_time], GETDATE()) > @timeoutSec
AND s_tdt.database_id <> DB_ID('msdb') -- EXCLUDE msdb system database
ORDER BY
[Begin Time] ASC;

DECLARE @rowcount bigint
SELECT @rowcount = COUNT(*) FROM #t

IF @rowcount > 0
BEGIN
--Sending Mail
EXEC master.dbo.ConvertTableToHtml 'SELECT * FROM #t', @htmlTable OUTPUT
SET @body = @body + '<br/>' + @htmlTable
EXEC msdb.dbo.sp_send_dbmail
@recipients = @recipients,
@body = @body,
@body_format = 'HTML',
@subject = @subject,
@importance = 'High';
END

DROP TABLE #t



Sample result:

Be aware that this script excludes the msdb database, as SQL Server uses this system database to run some background tasks.

2015-03-27

Transaction Log Disk Full due to Transactional Replication Not CatchUp

Problem Description:
When there is a bulk data operation (e.g. a bulk insert/update, index creation/rebuild, or a program bug that updates a huge number of rows) on a database publishing transactional replication, the Log Reader Agent may not be able to catch up with the bulk operation. The transaction log of the affected DB cannot be truncated, even when log backups are taken, until the Log Reader Agent catches up with the changed data. (Ref. http://support.microsoft.com/kb/317375 - section: Unreplicated transactions).


Emergency Resolution:
1. Assign extra transaction log space from another disk (if available, to buy time);
2. Stop the Log Reader Agent on the DB in Management Studio: right-click the affected Publication, select View Log Reader Agent Status, and click Stop;
3. Clear the pending replication commands in transaction log:
EXEC DB..sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;
4. Drop all publications in DB:
EXEC DB..sp_droppublication 'all';
5. Disable Replication Publisher role of DB:
EXEC sp_replicationdboption 'DB', 'publish', 'false';
6. Check DB transaction log truncation NOT blocked by 'REPLICATION' again:
SELECT name AS DB, log_reuse_wait_desc FROM master.sys.databases;
7. Backup Transaction Log again to free space;
8. DBCC SHRINKFILE to shrink the transaction log file, if required (a sketch of steps 7 and 8 follows).
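A sketch of steps 7 and 8, with hypothetical names and paths:
BACKUP LOG [DB] TO DISK = N'E:\Backup\DB_log.trn';
GO
USE [DB]
GO
DBCC SHRINKFILE (DB_log, 1024); -- logical log file name, target size in MB
GO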

2015-03-25

You reduced a column size, but the table got bigger. Why?

Choosing the correct data types can decrease row size and also improve performance. Sometimes you may give a column too much space during table creation, and after the database has been running in production for a while you would like to reduce the defined size of a column to one that still fits your system requirements. For example, you defined a column as fixed-length nchar(50), but later you identify that variable-length nvarchar(10) is good enough.

For example, you created the table like this:
CREATE TABLE TestTbl (
id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
col nchar(50) NULL
)
GO

The initial size is, of course, zero.
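You can check the table size with sp_spaceused (the original post showed screenshots of its output):
EXEC sp_spaceused 'TestTbl';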


In order to simulate the production usage of this table, let's populate it with 100000 rows.
DECLARE @i int = 1
WHILE @i <= 100000
BEGIN
INSERT TestTbl (col) VALUES ('XXX')
SET @i += 1
END
GO

The table is populated with data.


Having identified that this column can use a smaller defined size, you alter the column definition as below:
ALTER TABLE TestTbl ALTER COLUMN col nvarchar(10)

Now the column's defined size is reduced, from fixed-length nchar(50) to variable-length nvarchar(10). But when you check the table size, surprisingly, it has become bigger than before.


Why? Unfortunately, one thing is not commonly known: altering a table never decreases the row size. When you drop a column, SQL Server removes the column from the metadata but does not reclaim the space or rebuild the rows. When you change a column type from int to tinyint, for example, the actual storage size remains intact; SQL Server just checks the new value on insert/update. When you increase the size of a column (for example, changing int to bigint), SQL Server creates another bigint column and keeps the old int column's space intact.
So how do you fix it? You need to rebuild the clustered index. SQL Server reclaims all the space and rebuilds the rows when you do that. Be aware that a clustered index rebuild is a time-consuming operation that locks the table, so you should only do it in your system maintenance window.

Let's try to rebuild the clustered index:
ALTER INDEX [PK__TestTbl__3213E83F6E978ECC] ON TestTbl REBUILD -- replace with your own auto-generated primary key name

Now the table size is finally reduced!


The truth is, you should always carefully choose the type and length of every table column during the design phase.

2015-03-23

SQL Server 32-bit or 64-bit version?

You can use two different commands to check whether your SQL Server is 32-bit or 64-bit.

@@VERSION
It returns system and build information for the current installation of SQL Server.

SELECT @@VERSION

Sample result:
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Express Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

In the first line of the result, X86 = 32-bit and X64 = 64-bit. The same applies to the operating system.

SERVERPROPERTY('Edition')
The SERVERPROPERTY system function returns property information about the server instance. The 'Edition' property is the product edition of the instance of SQL Server. 64-bit versions of the Database Engine append (64-bit) to the version.

SELECT SERVERPROPERTY('Edition')

Sample result:
Express Edition (64-bit)

Hope this helps.

2015-03-18

Repairing a Corrupted Database

One of the most common reasons behind database corruption is a collision with third-party software. A virus attack can also corrupt the files. A hardware fault in your system or a crash of your hard disk drive (HDD) may cause the database files to become corrupt. And if the database files are stored in a compressed volume or folder, that can corrupt them too, so storing SQL Server database files in compressed volumes or folders should be avoided.

Consider this scenario:
You have been working in a SQL Server database for the last few days. One day you find that the database status is tagged as suspect, which means the database file may be corrupted. Or you are having problems connecting to the database. So how do you fix it?
According to the Microsoft KB "How to troubleshoot database consistency errors reported by DBCC CHECKDB", the best solution for database consistency errors is to restore from a known good backup. However, if you cannot restore from a backup, you can try DBCC CHECKDB to repair the errors. Below are the steps:

1. DBCC CHECKDB(DBName) WITH NO_INFOMSGS
DBName is the name of your corrupted database. If this command completes without any errors, the database does not need to be repaired.

2. ALTER DATABASE DBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE
The database must be set in single user mode before repairing it.

3. DBCC CHECKDB(DBName, REPAIR_REBUILD)
There are a number of repair options. You should first try REPAIR_REBUILD, which causes no data loss. If it succeeds, go to step 5.e (multi-user mode); if not, go to the next step.

4. DBCC CHECKDB(DBName, REPAIR_ALLOW_DATA_LOSS)
This may cause data loss. If it succeeds, go to step 5.e (multi-user mode); if not, go to the next step.

5.
a. ALTER DATABASE DBName SET EMERGENCY
b. ALTER DATABASE DBName SET SINGLE_USER
c. DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS
d. ALTER DATABASE DBName SET ONLINE
e. ALTER DATABASE DBName SET MULTI_USER

However, if you have tried all the steps above and are still unable to repair the database corruption, you can try a more powerful third-party SQL database recovery tool - Stellar Phoenix SQL Database Repair.