For example, suppose you created the table like this:
CREATE TABLE TestTbl (
id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
col nchar(50) NULL
)
GO
The initial size is, of course, zero.
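You can check it, for example, with the sp_spaceused system procedure:
-- Check how much space the table currently uses
EXEC sp_spaceused 'TestTbl'
GO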
To simulate production usage of this table, let's populate it with 100,000 rows.
DECLARE @i int = 1
WHILE @i <= 100000
BEGIN
INSERT TestTbl (col) VALUES (N'XXX')
SET @i += 1
END
GO
The table is populated with data.
Having identified that this column could use a smaller defined size, you alter the column definition as follows:
ALTER TABLE TestTbl ALTER COLUMN col nvarchar(10)
The column's defined size is now reduced, from fixed-length nchar(50) to variable-length nvarchar(10). But when you check the table size, it has surprisingly become bigger than before.
Why? Unfortunately, one thing is not commonly known: altering a table never decreases the row size. When you drop a column, SQL Server removes the column from the metadata but does not reclaim or rebuild the row. When you change a column type from int to tinyint, for example, the actual storage size remains intact; SQL Server just checks the new values on insert and update. When you increase the size of a field (for example, change int to bigint), SQL Server creates another bigint column and keeps the old int column's space intact.
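You can see this for yourself: sys.dm_db_index_physical_stats exposes the average row size, which does not shrink after the ALTER COLUMN. A quick check (the record-size columns are only populated in SAMPLED or DETAILED mode):
-- Average row size and page count for every index level on TestTbl
SELECT index_id, index_level, page_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TestTbl'), NULL, NULL, 'DETAILED')
GO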
So how do you fix it? You need to rebuild the clustered index; SQL Server reclaims the space and rebuilds the rows when you do that. Keep in mind that a clustered index rebuild is a time-consuming operation that locks the table, so you should only do it during your system maintenance window.
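Note that the primary key index name used below (PK__TestTbl__…) was auto-generated, so it will be different in your database. You can look up the clustered index name, for example, via sys.indexes:
-- Find the name of the clustered index on TestTbl
SELECT name
FROM sys.indexes
WHERE object_id = OBJECT_ID('TestTbl') AND type_desc = 'CLUSTERED'
GO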
Let's try to rebuild the clustered index:
ALTER INDEX [PK__TestTbl__3213E83F6E978ECC] ON TestTbl REBUILD
Now the table size is finally reduced!
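As an alternative, if you prefer not to reference the auto-generated index name at all, ALTER TABLE ... REBUILD (available in SQL Server 2008 and later) should achieve the same result; a minimal sketch for the same table, followed by re-checking the space:
-- Rebuild the table (its clustered index, or the heap) without naming the index
ALTER TABLE TestTbl REBUILD
GO
-- Verify the reclaimed space
EXEC sp_spaceused 'TestTbl'
GO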
The truth is, you should always choose the type and length of every table column carefully during the design phase.