2026-03-02

Filtered Index can lead to a reduction in query performance

This month, I faced a compelling performance challenge that I am eager to share. My client reported that one of their older data cleanup jobs was executing very slowly. Upon reviewing the Query Store and the table's indexes, I discovered something quite intriguing.

Below, the image depicts the execution plan obtained from the query store.



At its essence, the query is simple: a DELETE statement with a WHERE clause that filters the rows to be deleted by an AccountId column, which is already covered by a nonclustered index.
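As a sketch, the statement had roughly this shape (the table name and parameter below are illustrative, not the client's actual schema):

DELETE FROM dbo.Transactions        -- hypothetical table name
WHERE AccountId = @AccountId;       -- AccountId is covered by a nonclustered index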

Interestingly, the execution plan does not simply use the nonclustered index on AccountId to retrieve the qualifying rows (visible on the far right of the plan); it also performs a Key Lookup against the table's clustered index.

There is also a missing-index suggestion recommending a near-duplicate index that adds the ToBeDeleted column as an included column. This seems strange, since the DELETE query does not reference the ToBeDeleted column at all.

Let’s examine the Key Lookup operator more closely; it is there to retrieve the ToBeDeleted column, which is not present in the nonclustered index used by the rightmost operator.


Indeed, a filtered index is available on that table, specifically filtered by the ToBeDeleted column, as shown below.


The filtered index has a single key column, AccountId, and a filter predicate on the ToBeDeleted column, but no INCLUDE columns.
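An index of that shape can be sketched as follows; the index and table names are assumptions of mine, only the column roles come from the plan:

CREATE NONCLUSTERED INDEX IX_Transactions_AccountId_Filtered  -- hypothetical name
ON dbo.Transactions (AccountId)
WHERE ToBeDeleted = 1;   -- filter predicate only; no INCLUDE columns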

The following blog post explains why the extra Key Lookup appears in the plan and advises adding the filtered column either as a key column or as an included column.
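Following that advice, the index could be rebuilt with the filtered column carried along, sketched below with illustrative names:

CREATE NONCLUSTERED INDEX IX_Transactions_AccountId_Filtered  -- hypothetical name
ON dbo.Transactions (AccountId)
INCLUDE (ToBeDeleted)          -- carry the filtered column so no Key Lookup is needed
WHERE ToBeDeleted = 1
WITH (DROP_EXISTING = ON);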

In my opinion, it is important to assess the actual usefulness of a filtered index before adding it, and to verify that it genuinely improves query performance. Keep in mind that the optimizer can only match a filtered index when the query's predicate provably satisfies the index filter (typically when it compares against a constant rather than a parameter), so a filtered index's effectiveness is easy to overestimate.

2026-02-02

SQL Server terminated unexpectedly because of memory exhaustion and an Oracle linked server

This month, I want to discuss an incident at one of my clients, where their production SQL Server instance was unexpectedly terminated.


Moments before the SQL Server was terminated unexpectedly, an EXCEPTION_ACCESS_VIOLATION error was raised by a stored procedure call, as shown below:


The Stack Dump generated by SQL Server just before the service was shut down indicates that the exception is related to the Oracle Provider for OLE DB (OraOLEDB), as shown below:


Analyzing the minidump (.mdmp file) produced by SQL Server at the moment of the crash with the WinDbg tool also points to OraOLEDB as the source of the error, as shown below:


With the Oracle linked server provider option set to "Allow inprocess", as depicted in the image below, the provider runs inside the SQL Server process, so an unhandled exception in the provider can crash SQL Server itself.
(Ref.: Create Linked Servers - SQL Server | Microsoft Learn
SQL Server service crashes when you run an Oracle linked server query - SQL Server | Microsoft Learn)
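As a side note, per the Microsoft documentation linked above, OLE DB provider options can be inspected and changed with the sp_MSset_oledb_prop procedure. Disabling in-process loading isolates provider faults from the SQL Server process, at the cost of some performance; this is a general sketch, not the change we made in this case:

-- Run OraOLEDB out of process so a provider fault cannot take down SQL Server
EXEC master.dbo.sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 0;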


To address this issue, I lowered the SQL Server maximum memory limit from the original 95% to 85% of physical RAM. The server has 1.25TB of memory in total, and the system administrator at my client's company believed that reserving 5% (64GB) for the Windows OS was sufficient for a server dedicated to SQL Server. However, he had not considered that the Oracle OLE DB linked server provider also consumes memory, and that its usage grows as more concurrent SQL sessions invoke the Oracle linked server query. After I reduced the SQL Server maximum memory, leaving more headroom for the OraOLEDB provider, the problem was resolved.
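The adjustment itself is a one-liner. 85% of 1.25TB (1280GB) is 1088GB, i.e. 1114112MB:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1114112;  -- 1088GB = 85% of 1280GB
RECONFIGURE;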

2026-01-05

TempDB LOGBUFFER and IO_COMPLETION Waits

We observed numerous SQL sessions that were waiting for LOGBUFFER and IO_COMPLETION; all of these sessions were executing an INSERT INTO #Temp table, as illustrated below.
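A quick way to spot such sessions is a generic DMV query along these lines (not the exact script we used):

SELECT r.session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type IN ('LOGBUFFER', 'IO_COMPLETION');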


Upon examining the count of Virtual Log Files (VLFs) in the tempdb .ldf file, we discovered that the tempdb transaction log was significantly fragmented.
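On SQL Server 2016 SP2 and later, the VLF count can be read from the sys.dm_db_log_info DMF; on older builds, DBCC LOGINFO serves the same purpose:

SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('tempdb'));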


Subsequently, we executed defragmentation of the VLFs within the TempDB transaction log file, using the SQL commands provided below:
USE tempdb;
GO
-- Shrink the tempdb log to release the fragmented VLFs
DBCC SHRINKFILE ('templog', 0, TRUNCATEONLY);
GO
-- Regrow the log to its target size in one step
ALTER DATABASE [tempdb] MODIFY FILE (NAME = 'templog', SIZE = 150000MB);

The VLF count in the tempdb log file was reduced to 32, as shown below:

We also observed that the DB Full Backup job was planned to run at that time. Thus, we rescheduled the backup job to operate during off-peak hours.

We found that the maximum memory allocation for SQL Server was set to 75% of total server memory, which we believe came from the SQL Server installer GUI's default suggestion.
We therefore raised the SQL max memory limit to 1216GB, 95% of the 1.25TB total, since the server is dedicated chiefly to SQL Server services; reserving 64GB of RAM for the Windows operating system and other miscellaneous tasks should be adequate.
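For reference, 1216GB is 1245184MB, so the change looks like this:

EXEC sp_configure 'max server memory (MB)', 1245184;  -- 1216GB = 95% of 1280GB
RECONFIGURE;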

Finally, we relocated the tempdb transaction log file to a separate disk that offers improved write speed.