2025-03-04

Side-by-side SQL Server Always-On cluster upgrade using Log Shipping

Log Shipping 1st Run:

  1. Transfer SQL Server logins from the old SQL Server 2012&2016 to the new SQL Server 2022 primary and secondary servers.
  2. Migrate SQL Server Agent jobs from the old SQL Server 2012&2016 to the new SQL Server 2022 primary and secondary servers.
  3. Transfer Linked Servers (if any) from the old SQL Server 2012&2016 to the new SQL Server 2022 primary and secondary servers.
  4. Disable existing Transaction-Log Backup jobs on the old SQL Server 2012&2016 to prevent interference with Log Shipping.
  5. Establish a shared folder as the log shipping backup file destination path, ensuring it has read-write permissions for the SQL service account and is accessible by the new SQL Server 2022 servers.
  6. Configure Log Shipping from the primary AG replica of the old SQL Server 2012&2016 to both the primary and secondary AG replicas of the new SQL Server 2022:
    1. Perform a compressed full database backup on the primary AG replica of the old SQL Server 2012&2016 (see the sketch after this list).
    2. Copy the backup file to both the primary and secondary AG replicas of the new SQL Server 2022.
    3. Restore the backups on the new SQL Server 2022 servers using the WITH NORECOVERY option.
    4. Enable Log Shipping on the databases in the primary AG replica of the old SQL Server 2012&2016.
    5. Configure the log shipping jobs, selecting the created shared folder as the backup path.
    6. Add both the primary and secondary AG replicas of the new SQL Server 2022 as the Log Shipping destination servers.
    7. Verify the Log Shipping status using SQL Server 2022 standard reports --> Transaction Log Shipping Status.
    8. Repeat the above steps for each application database.
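
The backup and restore in steps 6.1 and 6.3 can be scripted roughly as below. This is a minimal sketch: the database name and the share path are placeholders, and COPY_ONLY is used only so this ad-hoc full backup does not become the base for the existing differential backups.

-- On the old primary AG replica: compressed full backup to the shared folder
BACKUP DATABASE [AppDb]
TO DISK = N'\\FileShare\LogShipping\AppDb_full.bak'
WITH COMPRESSION, COPY_ONLY, STATS = 10;

-- On each new SQL Server 2022 replica: restore and leave the database ready for log restores
-- (add MOVE clauses if the data/log file paths differ on the new servers)
RESTORE DATABASE [AppDb]
FROM DISK = N'\\FileShare\LogShipping\AppDb_full.bak'
WITH NORECOVERY, STATS = 10;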

Bring SQL Server 2022 Database Online for Application Testing:
  1. Disable Log Shipping on the old SQL Server 2012&2016.
  2. Re-enable existing log backup jobs on the old SQL Server 2012&2016 that were disabled during the first run of Log Shipping.
  3. Bring databases online in the primary replica of the new SQL Server 2022 (RESTORE LOG <db> WITH RECOVERY; see the sketch after this list).
  4. Configure Always-On Availability Group between the primary and secondary replicas of the new SQL Server 2022.
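
Step 3 above is a recovery-only restore; a minimal sketch with a placeholder database name:

-- On the new SQL Server 2022 primary replica: recover the log-shipped database
-- (no further log restores are possible after this)
RESTORE LOG [AppDb] WITH RECOVERY;
GO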

Log Shipping 2nd Run:
  1. Disable existing Transaction-Log Backup jobs on the old SQL Server 2012&2016 to prevent interference with Log Shipping.
  2. Remove the Always-On Availability Group on the new SQL Server 2022 servers.
  3. Configure Log Shipping from the primary AG replica of the old SQL Server 2012&2016 to both the primary and secondary AG replicas of the new SQL Server 2022:
    1. Perform a compressed full database backup on the primary AG replica of the old SQL Server 2012&2016.
    2. Copy the backup file to both the primary and secondary AG replicas of the new SQL Server 2022.
    3. Restore the backups on the new SQL Server 2022 servers using the WITH NORECOVERY option.
    4. Enable Log Shipping on the databases in the primary AG replica of the old SQL Server 2012&2016.
    5. Configure the log shipping jobs, selecting the created shared folder as the backup path.
    6. Add both the primary and secondary AG replicas of the new SQL Server 2022 as the Log Shipping destination servers.
    7. Verify the Log Shipping status using SQL Server 2022 standard reports --> Transaction Log Shipping Status.
    8. Repeat the above steps for each application database.

Switch-over:
  1. Terminate all client connections to the old SQL Server 2012&2016 by disabling SQL logins (see the sketch after this list).
  2. Execute LS-Backup jobs on the old SQL Server 2012&2016.
  3. Execute LS-Copy and LS-Restore jobs on the new SQL Server 2022 servers.
  4. Disable Log Shipping on the old SQL Server 2012&2016.
  5. Bring databases online in the primary replica of the new SQL Server 2022 (RESTORE LOG <db> WITH RECOVERY).
  6. Configure Always-On Availability Group between the primary and secondary replicas of the new SQL Server 2022.
  7. Estimated switch-over time: 15 minutes.
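
Steps 1 to 3 can be scripted. A minimal sketch, assuming a hypothetical application login (AppLogin) and default-style log shipping job names:

-- 1. On the old primary: block new connections by disabling the application login,
--    then kill its remaining sessions (repeat for each application login)
ALTER LOGIN [AppLogin] DISABLE;
DECLARE @kill nvarchar(max) = N'';
SELECT @kill += N'KILL ' + CAST(session_id AS nvarchar(10)) + N'; '
FROM sys.dm_exec_sessions
WHERE login_name = N'AppLogin';
EXEC sys.sp_executesql @kill;
GO

-- 2. On the old primary: run the final log shipping backup job
--    (sp_start_job is asynchronous; check job history before moving on)
EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_AppDb';
GO

-- 3. On each new SQL Server 2022 replica: run the copy and restore jobs
EXEC msdb.dbo.sp_start_job @job_name = N'LSCopy_OLDSQL_AppDb';
EXEC msdb.dbo.sp_start_job @job_name = N'LSRestore_OLDSQL_AppDb';
GO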

2025-02-14

Capturing T-SQL Anti-Pattern using Extended Event

SQL Server 2022 introduces the query_antipattern Extended Event (XEvent), which identifies anti-patterns in SQL queries sent to the server. An anti-pattern here is code that the SQL Server optimizer struggles to optimize effectively but cannot fix automatically. The description for the event (from the sys.dm_xe_objects DMV) states that it "occurs when a query antipattern is present and can potentially affect performance." Queries with antipatterns can severely impact performance by increasing resource consumption.
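
You can read the event's description straight from the Extended Events metadata, for example:

-- Show the description of the query_antipattern event from the XE metadata
SELECT o.name, o.description
FROM sys.dm_xe_objects AS o
WHERE o.name = N'query_antipattern' AND o.object_type = N'event';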

Including this event in an XEvent session helps identify potential application issues early in the SDLC (Software Development Life Cycle). If a query contains certain antipatterns, they are detected during query optimization. When this happens and the query_antipattern event is part of an active XEvent session, the event output captures the relevant fields, allowing quick identification of queries that need tuning.

Query optimization antipatterns captured by the query_antipattern event include:

  • Column data type conversion prevents index seek (implicit type conversion)
  • Non-optimal OR clauses
  • Large IN clauses
  • Many OR predicates
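
As an illustration of the first item, comparing an indexed varchar column against an nvarchar value forces an implicit conversion of the column, which typically prevents an index seek; this is the kind of query the implicit type conversion antipattern flags. The table and column below are hypothetical:

-- Hypothetical table: dbo.Customers with an indexed Email varchar(50) column.
-- The N'' literal is nvarchar, so the varchar column is implicitly converted
-- and the index on Email cannot be used for a seek.
SELECT CustomerID, Email
FROM dbo.Customers
WHERE Email = N'someone@example.com';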

Let's set up an event session and see how it works.

1. Open SSMS > connect to a SQL Server 2022 > Management > Extended Events > right-click Sessions > New Session...

2. Enter a Session Name. You can also tick the checkboxes "Start the event session at server startup" and "Start the event session immediately after session creation".

3. Go to the "Events" page. Under Event library, enter "antipattern", then click the "query_antipattern" row to highlight it; you will see the description and the event fields for this event.

4. Click the right arrow ">" button to add the query_antipattern event to the Selected events list, click on it, then click the Configure button.


5. In the "Global fields (Actions)" tab, tick the checkboxes of "plan_handle" and "sql_text".


6. Go to the "Data Storage" page, choose the "event_file" type, enter the file path and file name, and set the maximum file size and rollover. Click the OK button to create the event session (a T-SQL alternative is sketched after these steps).


7. The session is created. Right-click the event file under the created session, click View Target Data...


8. You can see there are some events being captured. Click on an event to show the details. Under the Details, you can see the antipattern type (a "local" event field), plan_handle and sql_text (both are Global fields). 


9. Double-click on plan_handle to open a new window, click Copy All button to copy the plan handle.


10. Open a new query window, run the following query with the plan_handle value:

-- Get Execution plan by plan handle
USE master
GO
SELECT DB_NAME(dbid), * FROM sys.dm_exec_query_plan (<plan_handle value>);

11. Inside the Results tab, click on the query_plan XML to show the graphical execution plan.
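
If you prefer T-SQL over the SSMS GUI, a session like the one above can be created with a script roughly as follows; the session name, file path, and size settings are assumptions:

-- Create an Extended Events session that captures query_antipattern
-- together with the plan_handle and sql_text global fields (actions)
CREATE EVENT SESSION [QueryAntipattern] ON SERVER
ADD EVENT sqlserver.query_antipattern
    (ACTION (sqlserver.plan_handle, sqlserver.sql_text))
ADD TARGET package0.event_file
    (SET filename = N'D:\XEvents\QueryAntipattern.xel', max_file_size = (10), max_rollover_files = (5))
WITH (STARTUP_STATE = ON);
GO

-- Start the session immediately
ALTER EVENT SESSION [QueryAntipattern] ON SERVER STATE = START;
GO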

Notes:
If you run the exact same query literal again, it will not generate a new antipattern event. This is because an antipattern is detected during query optimization rather than execution; a repeated identical query reuses the cached plan and is not re-optimized, so the event does not fire again.

2025-01-06

Azure Data Studio Free Extension - Schema Compare

Before Azure Data Studio, SQL Server developers could only use third-party SQL comparison tools such as Redgate SQL Compare to compare and synchronize SQL Server database schemas between two databases, either on the same server instance or on remote instances. Azure Data Studio (ADS) offers a FREE extension, Schema Compare, which provides an easy-to-use way to compare two database definitions and apply the differences from the source to the target. Sources and targets can be active database connections, dacpac files, or SQL projects.

After this extension is enabled, right-click the source database under CONNECTIONS > SERVERS, then choose Schema Compare.

Click the [...] button beside Target to select the target database:

Enter the Target Server name and Database, then press OK.


Press the Compare button to start the schema comparison:

After the comparison completes, the results are shown. You can then press the "Generate script" button to generate the deployment script.

2024-12-04

Create ER Diagram for Azure SQL Database

If you are a sysadmin or db_owner, you can create an ER diagram using the SSMS Database Diagram Designer; see the references below:

Create a New Database Diagram - Visual Database Tools | Microsoft Learn

Understand Database Diagram Ownership - Visual Database Tools | Microsoft Learn

Set Up Database Diagram Designer - Visual Database Tools | Microsoft Learn

But if you are a SQL developer, most likely you are not a member of the db_owner role or sysadmin, so you will not pass the permission check, as illustrated below:

Also, Azure SQL Databases do not support diagrams in the same way 'normal' on-premises SQL Servers do, so you should look at workarounds or third-party tools. In this blog post, I will show you a free third-party tool that can be used to create ER diagrams. DBeaver Community is a free and open-source universal database tool for developers and database administrators. DBeaver's ER diagram designer doesn't require database owner permission, as the ER diagram it creates is saved as a .erd file on your desktop (rather than in the database, as SSMS database diagrams are). The screenshot below shows a DBeaver ER diagram:

Creating an ER diagram in DBeaver is intuitive; you will find it much easier than in SSMS.

2024-11-07

Query to get all databases restore history

During a side-by-side database migration, most likely you will back up all the user databases from the source SQL Server, copy the backup files to the destination server (which may run a newer version when you are doing a version upgrade), and finally restore all the databases on the destination SQL Server. The query below is handy for verifying this migration by checking the latest restore history of all the databases in an instance.

;WITH CTE AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY destination_database_name ORDER BY restore_date DESC) AS row_num, *
    FROM msdb.dbo.restorehistory
)
SELECT D.[name], CTE.restore_date, CTE.restore_type, BS.backup_start_date, BS.server_name, BMF.physical_device_name
FROM master.sys.databases AS D
LEFT JOIN CTE ON D.[name] = CTE.destination_database_name AND CTE.row_num = 1
LEFT JOIN msdb.dbo.backupset AS BS ON CTE.backup_set_id = BS.backup_set_id
LEFT JOIN msdb.dbo.backupmediafamily AS BMF ON BS.media_set_id = BMF.media_set_id
ORDER BY D.[name];

This query should be run on the destination SQL Server. It returns all databases (system and user), the date and time of their latest restore operation, the type of restore operation (D = Full, I = Differential, L = Log), the date and time the backup was taken, the source SQL Server name, and the physical name of the backup device (the backup file) used. For example:


By checking the result of this query, you can make sure you got the right backup files to restore on the destination SQL Server.

2024-10-09

Handling DDL on Change Data Capture (CDC) enabled Table

SQL Server does not automatically track certain DDL changes in the Change Data Capture change tables after CDC has been enabled on a table; this includes adding new columns to the table. To make the CDC capture instance reflect DDL changes on the source table, you must disable CDC and then re-enable it on that table. Here is an example:

USE master;
GO

-- Create a database
CREATE DATABASE DbTestCDC;
GO

USE DbTestCDC;
GO

-- Create a table
CREATE TABLE TableTestCDC
(
    ID int NOT NULL PRIMARY KEY,
    FirstName varchar(30),
    LastName varchar(30),
    Email varchar(50)
);

-- Enable CDC for the current database
EXEC sys.sp_cdc_enable_db;
GO

SELECT name, is_cdc_enabled FROM sys.databases;
GO

-- Enable CDC for the specified table in the current database
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableTestCDC',
    @role_name     = NULL,
    @supports_net_changes = 1;
GO

-- Check CDC enabled on that table
EXEC sys.sp_cdc_help_change_data_capture @source_schema = 'dbo', @source_name = 'TableTestCDC';

-- Check CDC retention (default 4320 minutes = 3 days)
SELECT DB_NAME(database_id), * FROM msdb.dbo.cdc_jobs;


-- Testing
INSERT INTO TableTestCDC VALUES (1, 'Jorge', 'Ramos', 'ramos@yahoo.com');
INSERT INTO TableTestCDC VALUES (2, 'Andrew', 'Strudwick', 'andrew@yahoo.com');
INSERT INTO TableTestCDC VALUES (3, 'Thomas', 'Tuchel', 'tuchel@gmail.com');
DELETE FROM TableTestCDC WHERE ID = 1;
UPDATE TableTestCDC SET LastName = 'ABC', FirstName = 'XYZ' WHERE ID = 2;
DELETE FROM TableTestCDC WHERE ID = 3;
SELECT * FROM TableTestCDC;
GO

-- Give the CDC capture job a few seconds to harvest the changes from the log
WAITFOR DELAY '00:00:05';
GO

-- Query CDC data
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_TableTestCDC');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();
SELECT sys.fn_cdc_map_lsn_to_time([__$start_lsn]) AS changeTime,
       CASE [__$operation] WHEN 1 THEN 'delete' WHEN 2 THEN 'insert' WHEN 3 THEN 'before update' WHEN 4 THEN 'after update' END AS operation,
       *
FROM cdc.fn_cdc_get_all_changes_dbo_TableTestCDC(@from_lsn, @to_lsn, N'all update old');
GO

-- ALTER TABLE on the source table, e.g. add a new column
ALTER TABLE dbo.TableTestCDC ADD NewCol varchar(50) NULL;
GO

-- Disable CDC on that table, and then re-enable CDC to reflect the table DDL changes
EXECUTE sys.sp_cdc_disable_table
    @source_schema = N'dbo',
    @source_name   = N'TableTestCDC',
    @capture_instance = N'all';
GO

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableTestCDC',
    @role_name     = NULL,
    @supports_net_changes = 1;
GO

EXEC sys.sp_cdc_help_change_data_capture @source_schema = 'dbo', @source_name = 'TableTestCDC';


-- Testing (with new column)
INSERT INTO TableTestCDC VALUES (4, 'Jorge', 'Ramos', 'ramos@yahoo.com', 'NewCol 1');
INSERT INTO TableTestCDC VALUES (5, 'Andrew', 'Strudwick', 'andrew@yahoo.com', 'NewCol 2');
INSERT INTO TableTestCDC VALUES (6, 'Thomas', 'Tuchel', 'tuchel@gmail.com', 'NewCol 3');
DELETE FROM TableTestCDC WHERE ID = 4;
UPDATE TableTestCDC SET LastName = 'ABC', FirstName = 'XYZ', NewCol = 'New Col X' WHERE ID = 5;
DELETE FROM TableTestCDC WHERE ID = 6;
SELECT * FROM TableTestCDC;
GO

-- Query CDC data
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_TableTestCDC');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();
SELECT sys.fn_cdc_map_lsn_to_time([__$start_lsn]) AS changeTime,
       CASE [__$operation] WHEN 1 THEN 'delete' WHEN 2 THEN 'insert' WHEN 3 THEN 'before update' WHEN 4 THEN 'after update' END AS operation,
       *
FROM cdc.fn_cdc_get_all_changes_dbo_TableTestCDC(@from_lsn, @to_lsn, N'all update old');
GO



2024-09-25

Get the Last Known Actual Execution Plan

SQL Server 2019 introduces a new DMV, sys.dm_exec_query_plan_stats, which returns the equivalent of the last known actual execution plan for a previously cached query plan. (All plans are estimated plans; an "actual" plan is essentially the estimated plan plus runtime metrics.) You can turn this feature on at the server level using trace flag 2451, or at the database level with the new LAST_QUERY_PLAN_STATS database scoped configuration. For example:

USE [<DB_Name>];
GO
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;
GO

Then you can query the last known actual execution plan like below:

SELECT cp.objtype, DB_NAME(st.dbid) AS DB, OBJECT_NAME(st.objectid, st.dbid) AS ObjectName, st.[text], qps.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(plan_handle) AS qps
--WHERE OBJECT_NAME(st.objectid, st.dbid) = '<stored proc/function name>'
;

In the result, you can click on the query_plan XML to show the graphical plan.

As you see, the plan includes actual runtime metrics.