SQL Server allows you to back up your database or transaction log to a striped media set: a set of disk files across which the backup data is divided into blocks and written in a fixed order. Most database administrators overlook this feature, yet it can speed up the backup process by distributing the backup workload across multiple storage devices.
The script below demonstrates how to create a striped backup set:
BACKUP DATABASE [StackOverflow2013]
TO
DISK='D:\DbBackup\StackOverflow2013_1.bak',
DISK='D:\DbBackup\StackOverflow2013_2.bak',
DISK='D:\DbBackup\StackOverflow2013_3.bak',
DISK='D:\DbBackup\StackOverflow2013_4.bak'
WITH STATS = 10
GO
The script above uses the same drive D:\ for every file, but you can specify a different drive for each DISK clause to spread the I/O load.
The script below demonstrates how to restore from a striped backup:
RESTORE DATABASE [StackOverflow2013]
FROM
DISK='D:\DbBackup\StackOverflow2013_1.bak',
DISK='D:\DbBackup\StackOverflow2013_2.bak',
DISK='D:\DbBackup\StackOverflow2013_3.bak',
DISK='D:\DbBackup\StackOverflow2013_4.bak'
WITH STATS = 10
GO
The major downside of a striped backup is that if any one backup file is lost or corrupted, the whole media set becomes unusable and you cannot restore from it.
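Because of this risk, it is worth validating every stripe right after the backup completes. Below is a minimal sketch, reusing the same file names as above; RESTORE VERIFYONLY checks that the media set is complete and readable without actually restoring it:

```sql
-- Verify the striped media set is complete and readable
RESTORE VERIFYONLY
FROM
DISK='D:\DbBackup\StackOverflow2013_1.bak',
DISK='D:\DbBackup\StackOverflow2013_2.bak',
DISK='D:\DbBackup\StackOverflow2013_3.bak',
DISK='D:\DbBackup\StackOverflow2013_4.bak';
GO
```

Note that all members of the media set must be supplied; if one file is missing, SQL Server raises an error identifying the incomplete media family.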
2019-11-19
2019-10-15
Auditing Stored Procedure Execution
In the previous blog post, I demonstrated how to use the SQL Server Audit feature to keep track of login and logout events. This post shows how to keep track of stored procedure executions. The script below creates the audit:
USE [master]
GO
CREATE SERVER AUDIT [Audit_SP_Exec]
TO FILE
( FILEPATH = N'D:\SqlAudit' -- *** Modify the File Path ***
,MAXSIZE = 50 MB
,MAX_ROLLOVER_FILES = 2000
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 5000
,ON_FAILURE = CONTINUE
)
GO
USE [TestDB] -- *** your application DB ***
GO
CREATE DATABASE AUDIT SPECIFICATION [DbAuditSpec_SP_Exec]
FOR SERVER AUDIT [Audit_SP_Exec]
ADD (EXECUTE ON OBJECT::[dbo].[uspTest] BY [public]),
ADD (EXECUTE ON OBJECT::[dbo].[uspTest2] BY [public])
GO
ALTER DATABASE AUDIT SPECIFICATION [DbAuditSpec_SP_Exec] WITH (STATE = ON)
GO
USE [master]
GO
ALTER SERVER AUDIT [Audit_SP_Exec] WITH (STATE = ON)
GO
Then you can view the audit log in SSMS by right-clicking the audit under Security > Audits in Object Explorer and choosing View Audit Logs.
2019-09-17
How to Audit Login and Logout for specific logins
Starting from SQL Server 2008, we can use the SQL Server Audit feature to record numerous actions that occur on our SQL Server. SQL Server Audit uses Extended Events under the hood, which imposes less overhead than the plain old SQL Trace.
In this blog post, I would like to show you how to audit login and logout events for specific logins, which can only be done by setting a filter on the Server Audit.
USE [master]
GO
CREATE SERVER AUDIT [Audit_sysadm_login]
TO FILE
( FILEPATH = N'D:\SqlAudit\'
,MAXSIZE = 10 MB
,MAX_FILES = 100
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 5000
,ON_FAILURE = CONTINUE
)
WHERE ([server_principal_name]='sa' OR [server_principal_name]='peter.lee')
GO
CREATE SERVER AUDIT SPECIFICATION [ServerAuditSpec_sysadm_login]
FOR SERVER AUDIT [Audit_sysadm_login]
ADD (SUCCESSFUL_LOGIN_GROUP),
ADD (LOGOUT_GROUP)
WITH (STATE = ON)
GO
ALTER SERVER AUDIT [Audit_sysadm_login] WITH (STATE = ON)
GO
* Please be reminded that ON_FAILURE is a critical argument of the CREATE SERVER AUDIT statement. If keeping your SQL Server service online is more important than the auditing requirement, set this argument to CONTINUE (the alternatives are SHUTDOWN and FAIL_OPERATION).
The simplest way to check the audit log is using the Log File Viewer, which can be opened in Object Explorer > Security > Audits > right-click the audit > View Audit Logs.
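If you prefer T-SQL over the Log File Viewer, you can also read the audit files directly with the sys.fn_get_audit_file function. A minimal sketch, assuming the same file path as in the script above:

```sql
SELECT event_time,
       action_id,              -- e.g. LGIS = successful login, LGO = logout
       succeeded,
       server_principal_name,
       [statement]
FROM sys.fn_get_audit_file(N'D:\SqlAudit\*', DEFAULT, DEFAULT)
ORDER BY event_time DESC;
```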
2019-08-23
Disabling and Re-Enabling SQL Logins
During system maintenance, as a production DBA, you often need to deploy stored procedure changes and data patches on the production database. While you apply those changes, you want to prevent users and applications from accessing the database; otherwise program errors can occur and logical data integrity can be broken. You can use the SQL script below to disable non-sysadmin SQL logins before you apply changes, and re-enable them after your maintenance tasks complete.
DECLARE @sysadmins table ([name] sysname);
INSERT @sysadmins SELECT prin2.[name]
FROM sys.server_principals prin
JOIN sys.server_role_members mem ON prin.principal_id = mem.role_principal_id
JOIN sys.server_principals prin2 ON prin2.principal_id = mem.member_principal_id
WHERE prin.[type] = 'R' and prin.[name] = N'sysadmin';
--SELECT * FROM @sysadmins;
DECLARE @appLogins table (id int IDENTITY, [name] sysname);
INSERT @appLogins SELECT [name] FROM sys.server_principals
WHERE [type] = 'S' and [name] <> 'sa' AND [name] NOT IN (SELECT [name] FROM @sysadmins) AND [name] NOT LIKE '##%' AND [name] NOT LIKE '%\%';
SELECT * FROM sys.server_principals WHERE [name] IN (SELECT [name] FROM @appLogins);
DECLARE @cnt int, @loginName varchar(100);
SELECT @cnt = COUNT(*) FROM @appLogins;
WHILE @cnt <> 0
BEGIN
SELECT @LoginName = [Name] FROM @appLogins WHERE id = @cnt;
PRINT 'ALTER LOGIN [' + @LoginName + '] DISABLE;';
PRINT 'ALTER LOGIN [' + @LoginName + '] ENABLE;';
--EXEC('ALTER LOGIN [' + @LoginName + '] DISABLE;');
--EXEC('ALTER LOGIN [' + @LoginName + '] ENABLE;');
SET @cnt = @cnt - 1;
END
SELECT * FROM sys.server_principals WHERE [name] IN (SELECT [name] FROM @appLogins);
2019-07-18
Solving Last-Page Insert Latch Contention by Index Partitioning
According to a Microsoft KB, if you have an ever-increasing index on a table, populated by IDENTITY or GETDATE() for example, then under a large number of concurrent queries/threads inserting new rows into that table, the threads will contend for the exclusive latch (PAGELATCH_EX) on the last data page of that index in the buffer pool.
Let's discuss in more detail what happens internally in SQL Server when a query reads and writes pages in memory. When SQL Server accesses a page in the buffer pool, the page access must be synchronized among multiple worker threads. Every time a query reads a page, its worker thread has to acquire a shared latch (PAGELATCH_SH), and every time a query writes a page, its worker thread has to acquire an exclusive latch (PAGELATCH_EX). When you run an INSERT statement, the worker thread exclusively latches the page, and in the meantime no other concurrent worker thread can read from or write to this page. With an ever-increasing index, parallel INSERT queries contend for an exclusive latch on the same last page of that index, as illustrated below:
In this blog post, I'm going to demonstrate how to solve this problem by partitioning the index. Starting from SQL Server 2016 SP1, the index partitioning feature is available in all editions.
Let's create the demo table first, with an identity primary key:
CREATE TABLE Ticket (
id int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
colA varchar(50)
);
By using the SQLQueryStress tool created by Adam Machanic, you can simulate multiple concurrent INSERT statements against the demo table.
While the stress test is running, you can see PAGELATCH_EX waits on some sessions by querying sys.dm_exec_requests:
SELECT wait_type, wait_resource, COUNT(1) cnt FROM sys.dm_exec_requests GROUP BY wait_type, wait_resource ORDER BY cnt DESC;
The wait resource in this case is 6:1:134489, meaning database id 6, file 1 and page 134489. You can check that page to determine what it is by using the DBCC PAGE command:
DBCC TRACEON(3604);
GO
DBCC PAGE(6, 1, 134489, 0);
GO
The ObjectId is 146099561. You can get the actual object name using the OBJECT_NAME function, which gives you the Ticket table you just created.
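For reference, OBJECT_NAME accepts an optional database_id as its second argument, so you can resolve the id even from another database context:

```sql
-- Resolve the object id reported by DBCC PAGE (database id 6 in this example)
SELECT OBJECT_NAME(146099561, 6) AS [object_name];
-- returns 'Ticket' in this demo
```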
In order to minimize the PAGELATCH_EX contention on the last page, we can use partitioning with a computed column. Let's do this:
-- suppose you have 4 CPU cores in your SQL Server
CREATE PARTITION FUNCTION pf_hash (tinyint) AS RANGE LEFT FOR VALUES (0, 1, 2, 3);
GO
CREATE PARTITION SCHEME ps_hash AS PARTITION pf_hash ALL TO ([PRIMARY]);
GO
DROP TABLE Ticket;
GO
CREATE TABLE Ticket (
id int IDENTITY(1, 1) NOT NULL,
colA varchar(50),
hashId AS CONVERT(tinyint, ABS(id % 4)) PERSISTED NOT NULL,
CONSTRAINT PK_Ticket PRIMARY KEY CLUSTERED (id, hashId)
) ON ps_hash(hashId);
GO
Let's verify that the table is partitioned:
SELECT DISTINCT
table_name = O.[name],
partition_index_name = I.[name],
partition_index_type = I.[type_desc],
partition_scheme = PS.[name],
partition_function = PF.[name]
FROM sys.partitions P
JOIN sys.objects O ON O.[object_id] = P.[object_id]
JOIN sys.indexes I ON P.[object_id] = I.[object_id] AND P.index_id = I.index_id
JOIN sys.data_spaces DS ON I.data_space_id = DS.data_space_id
JOIN sys.partition_schemes PS ON DS.data_space_id = PS.data_space_id
JOIN sys.partition_functions PF ON PS.function_id = PF.function_id;
Now run the stress test again; you will see that the PAGELATCH_EX waits become far fewer than before.
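You can also confirm that the inserted rows are spread across the partitions. A quick sketch using sys.dm_db_partition_stats against the clustered index:

```sql
SELECT partition_number, row_count
FROM sys.dm_db_partition_stats
WHERE [object_id] = OBJECT_ID('Ticket')
  AND index_id = 1   -- the clustered primary key
ORDER BY partition_number;
```

Because hashId is computed as id % 4, consecutive identity values land in different partitions, so concurrent inserts latch four different last pages instead of one.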
2019-06-18
Lock Escalation is evaluated per Statement only, NOT per Session
Lock Escalation is an internal optimization process in which SQL Server converts many fine-grained locks, such as row/page locks, into a single coarse-grained table or partition lock, in order to release the memory held by the many fine-grained locks. When SQL Server is going to modify a large number of rows, it is more efficient for the database engine to take fewer, larger locks instead of dealing with a large number of individual locks. In this blog post, I'm going to show that SQL Server evaluates lock escalation per statement, rather than per session. Let's take a look at the demonstration below:
USE AdventureWorks2008R2
GO
BEGIN TRAN
DELETE TOP(5000) FROM [Sales].[SalesOrderHeader];
-- Lock Escalation to Table Level
SELECT resource_type,resource_subtype,request_mode,COUNT(*)
FROM sys.dm_tran_locks WHERE request_session_id = @@SPID
GROUP BY resource_type,resource_subtype,request_mode;
ROLLBACK
GO
BEGIN TRAN
DECLARE @loop int = 1;
WHILE @loop < 10
BEGIN
DELETE TOP(1000) FROM [Sales].[SalesOrderHeader];
SET @loop += 1;
END
-- Lock Escalation NOT occurred!
SELECT resource_type,resource_subtype,request_mode,COUNT(*)
FROM sys.dm_tran_locks WHERE request_session_id = @@SPID
GROUP BY resource_type,resource_subtype,request_mode;
ROLLBACK
As you can see from the result, the first batch deletes 5,000 rows in a single statement, so the key and page locks are escalated into one table lock. The second batch deletes 9,000 rows using a WHILE loop, but each individual statement deletes only 1,000 rows (below the escalation threshold of roughly 5,000 locks per statement), so no locks are escalated.
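As a side note, if escalation on a particular table hurts concurrency, you can control it with the table-level LOCK_ESCALATION option (available since SQL Server 2008). This changes whether escalation happens, not how it is evaluated; the evaluation remains per statement:

```sql
-- Disallow escalating to a table-level lock on this table
ALTER TABLE [Sales].[SalesOrderHeader] SET (LOCK_ESCALATION = DISABLE);

-- Revert to the default behavior
ALTER TABLE [Sales].[SalesOrderHeader] SET (LOCK_ESCALATION = TABLE);
```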
2019-05-06
Tracking Schema Changes using DDL Triggers
The simplest way to check the schema change history of a database is the Schema Changes History report, which can be opened in SSMS by right-clicking your SQL Server > Reports > Standard Reports > Schema Changes History. This built-in report shows the changes you've made using Data Definition Language (DDL) statements, relying on the Default Trace mechanism (SQL Server records certain events about your system in a "default trace", a binary file built on the SQL Trace mechanism). But the built-in report has some weaknesses: it rolls over, so the change history isn't persisted forever; it doesn't show the DDL statements that were run; and it shows only the impersonated login (set via the EXECUTE AS statement) rather than the original login.
You can track schema changes better by implementing a DDL trigger yourself. Below is the script.
CREATE DATABASE AuditDB;
GO
USE AuditDB;
GO
CREATE TABLE DDLEvent (
eventDate datetime NOT NULL DEFAULT GETDATE(),
eventType nvarchar(255),
eventDDL nvarchar(max),
eventXML xml,
databaseName nvarchar(255),
schemaName nvarchar(255),
objectName nvarchar(255),
hostName nvarchar(255),
ipAddress nvarchar(255),
programName nvarchar(255),
loginName nvarchar(255)
);
GO
CREATE CLUSTERED INDEX CX_DDLEvent ON DDLEvent (eventDate)
GO
-- Modify this to your application DB name
USE YourDB;
GO
CREATE OR ALTER TRIGGER DDLTrigger_AuditSchemaChange
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE, CREATE_TRIGGER, ALTER_TRIGGER, DROP_TRIGGER, CREATE_TABLE, ALTER_TABLE, DROP_TABLE, RENAME
AS
BEGIN
SET NOCOUNT ON;
DECLARE @eventData xml = EVENTDATA();
DECLARE @ip nvarchar(255) = CONVERT(nvarchar(255), CONNECTIONPROPERTY('client_net_address'));
INSERT AuditDB.dbo.DDLEvent (
eventType, eventDDL, eventXML, databaseName, schemaName, objectName, hostName, ipAddress, programName, loginName
) SELECT
@eventData.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(255)'),
@eventData.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(max)'),
@eventData,
DB_NAME(),
@eventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(255)'),
@eventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(255)'),
HOST_NAME(),
@ip,
PROGRAM_NAME(),
ORIGINAL_LOGIN()
;
END
GO
By default, once you create a DDL trigger, it is enabled immediately. You can then check the audit table with a simple SELECT statement.
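A quick way to test the trigger is to run a harmless DDL statement in the audited database and then query the audit table (dbo.DemoTable below is just a throwaway name for this test):

```sql
USE YourDB;
GO
CREATE TABLE dbo.DemoTable (id int);  -- fires the CREATE_TABLE event
DROP TABLE dbo.DemoTable;             -- fires the DROP_TABLE event
GO
SELECT TOP (10) eventDate, eventType, objectName, loginName, eventDDL
FROM AuditDB.dbo.DDLEvent
ORDER BY eventDate DESC;
GO
```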