Foliotek Developer Blog

Index Recommendations from the Database Engine Tuning Advisor should not be accepted blindly

The Problem

I ran a SQL Profiler session on one of our SQL databases today to catch any expensive queries and see if they could be optimized. One query that showed up selected the top 30 rows from one table where a certain bit column was set to true, sorted by oldest first. The table being queried had close to 100,000 rows in it. Each query took about 6,000 reads.

The query looked something like this:

 SELECT TOP (30) *  
 FROM TheTable  
 WHERE BitColumn = 1  
 ORDER BY DateColumn  
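If you want to check the read count for a query like this without running Profiler, the standard SET STATISTICS IO session option reports it per statement (the table and column names here are the placeholders from the example above):

```sql
-- Report logical reads for each statement in this session
SET STATISTICS IO ON;

SELECT TOP (30) *
FROM TheTable
WHERE BitColumn = 1
ORDER BY DateColumn;
-- The Messages tab then shows a line like:
-- Table 'TheTable'. Scan count 1, logical reads ...

SET STATISTICS IO OFF;
```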

The Suggested Solution

My inclination was that all I needed to do was add a very small index on the BIT column used in the WHERE clause. However, I went ahead and ran the query through the Database Engine Tuning Advisor, just to see what it came up with.

The index recommendation suggested an index on three columns, one of which was, in fact, the one in the WHERE clause. The Date column was also included along with the table’s Primary ID. The estimated improvement was 99%, which was great and what I had expected.

However, the suggested index also had every other table column in its INCLUDE statement. The result was that the new index was projected to take up 68.5 MB of space. That’s quite a bit of space for an index on a bit column.
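The shape of that recommendation was roughly the following (the column names are placeholders, since the real table's columns aren't listed here):

```sql
-- Advisor-style covering index: key on the filter/sort columns,
-- plus every remaining table column in the INCLUDE list
CREATE NONCLUSTERED INDEX IX_Advisor_Suggestion
ON TheTable (BitColumn, DateColumn, ID)
INCLUDE (ColumnA, ColumnB, ColumnC /* ...and so on for every other column */);
```

The INCLUDE list is what makes it a covering index, and also what makes it effectively a second copy of the table.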

The Employed Solution

Consequently, I went into SQL Server Management Studio and manually added a simple index on just the column used in the WHERE clause. Then I checked the size of the new index and found that it was only 1.75 MB, significantly less than 68.5 MB.
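The manual index amounts to something like this (the index name is made up; the table and column names are the placeholders from the example query):

```sql
-- A plain nonclustered index on just the filtered bit column,
-- with none of the INCLUDE columns the Tuning Advisor suggested
CREATE NONCLUSTERED INDEX IX_TheTable_BitColumn
ON TheTable (BitColumn);
```

If the sort still showed up as expensive, a two-column key on (BitColumn, DateColumn) would be a middle ground, still far smaller than the Advisor's suggestion.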

The query now requires only 14 reads, compared with the nearly 6,000 it used before, and I haven’t added 68 MB of index storage to the database.

Final Remark

So, there are certainly times when the recommendations from the Database Engine Tuning Advisor would be fine. Each case is different, of course. In this case, adding 68 MB of index storage just was not worth it. Adding a simple index manually gave the optimization we needed without meaningfully impacting the size of the database. Of course, 68 MB is not a lot by itself, but adding that 10 or 20 or 50 times when it is not necessary would certainly begin to add up.

So, the Database Engine Tuning Advisor is definitely helpful, but it should be used as a guide rather than as something not to be questioned.

An Obscure Performance Pitfall for Test Accounts and Improperly Indexed Database Tables

I recently came across a very problematic pitfall that test accounts can easily fall into while remaining completely unaware of the issue for quite some time. The pitfall involves improperly indexed tables combined with test accounts that reference some of the first rows in those tables.

Test accounts for system testers are often some of the first accounts created in a system and therefore appear very early in the user tables. So, when a test account logs in, the user table or another related table could be very poorly indexed, causing the database to fall back to a table scan (reading all rows) rather than using an appropriate index. However, since the test account is one of the first accounts, its performance is actually pretty good. The reads are very low as well, so this issue can go undetected for quite a while, even though real user experience could be pretty bad.

Inevitably, a complaint will come in. The system testers will test by logging in themselves, perhaps from home, a coffee shop, or the slow connection at their parents’ house. Yet they find that the system does pretty well, nothing like what the user called to complain about. It’s easy to chalk up the one user’s experience to a bad connection, a virus-filled computer, or the old PEBCAK issue, since the problem seems to be unrepeatable.

I came across this in our system recently after aggregating some SQL Server Profiler results. A particular table that had been used quite some time ago, but is no longer even available to newer users due to better options, was showing a substantial number of reads per query. The system was checking that table to see if the logged-in user had created anything using this old functionality. For a test user that had actually used it long ago, the system was able to find a match very quickly, even though the table had not been indexed on the User ID. Since the rows for the test account came early in the table, the query did not take long at all. However, for newer users, the system ended up reading the entire table every time to look for matches, never finding any since the functionality was no longer available to new users.

So, the pitfall was that the test accounts were created early on and were not representative of the newest users. In fact, the experience was quite a bit different. My specific test showed that an early test account could retrieve the appropriate row in 7 reads, while a user with no matches generated 6,465 reads, every time. This isn’t a whole lot compared to other complex queries, but aggregated over every user that logs in, it takes a good deal of resources and is completely unnecessary usage of the database.

Creating the index on User ID reduced the number of reads for a query with no matches down to just 8. Aggregated across a lot of users, this was a significant reduction in total reads.
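The fix amounts to an ordinary nonclustered index on the lookup column (the table and column names below are hypothetical, since the real ones aren't given):

```sql
-- Index the column the login check filters on, so the
-- "no matches" case is a quick seek instead of a full table scan
CREATE NONCLUSTERED INDEX IX_OldFeatureTable_UserID
ON OldFeatureTable (UserID);
```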

So, this is a pretty obscure pitfall. After realizing what was going on, it seemed pretty obvious, but it certainly was not obvious before.

My recommendation, then, is to make sure new test accounts are created periodically, so that the test experience actually represents the experience of newer users to the system.

SQL Server Optimization Script with Index Rebuilding to Reduce Fragmentation


I recently completed the SQL Server 2008 Essential Training and learned about the Standard Reports feature in SQL Server Management Studio. From looking at these reports, I observed that several of our indexes on various databases had become quite fragmented and were sometimes spread out over thousands of pages.

I searched for some information online and was directed to a query that would identify indexes that were 30% or more fragmented and rebuild them all. I noticed, though, that some indexes on very small tables weren’t seeing any improvement, so I added another parameter to filter out indexes with a small number of pages.

The results were pretty significant. One database’s size was actually reduced by over 30%, since the fragmented indexes were taking up so much space and were so inefficiently stored.

SQL Server Standard Reports

If you haven’t seen the SQL Standard Reports before, do the following:

  • Right click on your database in SQL Server Management Studio
  • Mouse over “Reports” and then “Standard Reports”
  • Select a report.

The two reports I found most useful were “Index Physical Statistics” and “Disk Usage by Top Tables”. On the “Index Physical Statistics” report, you will see a recommendation for each index that says 1) nothing, 2) reorganize, or 3) rebuild. The reorganize is quicker and, from what I saw, more geared toward slightly fragmented indexes. The rebuild is for more highly fragmented ones. Of particular note on this report is the number of pages each index is taking up. When that numbers in the thousands with a high percentage of fragmentation, the index really needs to be rebuilt and will probably yield some significant improvements in performance and storage requirements.
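Those two recommendations correspond to two forms of the same ALTER INDEX statement (the index and table names here are hypothetical):

```sql
-- Lighter-weight option: defragments the leaf pages in place, always online
ALTER INDEX IX_Example ON dbo.SomeTable REORGANIZE;

-- Heavier option: drops and recreates the index, for heavy fragmentation
ALTER INDEX IX_Example ON dbo.SomeTable REBUILD;
```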

On the “Disk Usage by Top Tables” report, you can see which tables in your database are taking up the most space, which have the most records, and how much storage is being allocated to indexes. From what I could tell, the storage for the clustered (primary key) index seems to be included in the Data (KB) column rather than the Indexes (KB) column.

NOTE: Before you start rebuilding any indexes, I would recommend that you do the following for comparison purposes:

  • Save these reports (or export them), so you can tell how significantly the rebuild improved your database by reducing fragmentation and database size.
  • Write a fairly complex query against your database and note its duration and reads, so you can rerun it afterward and compare.

Also, if your indexes are highly fragmented and you don’t know when they were last rebuilt, it may be better to begin with higher values for the fragmentation and page count, so you don’t make your database inaccessible for too long. You might start at 95% and 1,000 pages and then walk down bit by bit.

Index Rebuild Script

So, here’s the script I ran to identify and rebuild fragmented indexes with a high page count (you can adjust the fragmentation percentage and page count as desired). I definitely borrowed most of this from somewhere else, but now I can’t find where that was. My apologies for that.


-- Ensure a USE statement has been executed first.

DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @frag float;
DECLARE @pagecount int;
DECLARE @command nvarchar(4000);

DECLARE @page_count_minimum smallint
SET @page_count_minimum = 50

DECLARE @fragmentation_minimum float
SET @fragmentation_minimum = 30.0

-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.

SELECT object_id AS objectid,
    index_id AS indexid,
    partition_number AS partitionnum,
    avg_fragmentation_in_percent AS frag,
    page_count AS page_count
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > @fragmentation_minimum
    AND index_id > 0
    AND page_count > @page_count_minimum;

-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
WHILE (1=1)
BEGIN
    FETCH NEXT
    FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag, @pagecount;

    IF @@FETCH_STATUS < 0 BREAK;

    SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;

    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;

    SELECT @partitioncount = count(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;

    SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';

    IF @partitioncount > 1
        SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));

    EXEC (@command);

    PRINT N'Rebuilding index ' + @indexname + N' on table ' + @objectname;
    PRINT N'    Fragmentation: ' + CAST(@frag AS varchar(15));
    PRINT N'    Page Count: ' + CAST(@pagecount AS varchar(15));
    PRINT N' ';
END

-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

-- Drop the temporary table.
DROP TABLE #work_to_do;


SQL Server Agent Job

In order to prevent my indexes from ever getting too fragmented again, I set up a job under SQL Server Agent that runs once a month. The frequency that is best for your database will depend on its size and how fast it grows.
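Scripted out, a monthly Agent job looks roughly like this (the job, schedule, database, and procedure names are all hypothetical; it assumes you've saved the rebuild script as a stored procedure):

```sql
USE msdb;

-- Create the job with one step that runs the rebuild script
EXEC dbo.sp_add_job @job_name = N'Monthly Index Rebuild';
EXEC dbo.sp_add_jobstep @job_name = N'Monthly Index Rebuild',
    @step_name = N'Rebuild fragmented indexes',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',
    @command = N'EXEC dbo.usp_RebuildFragmentedIndexes;';

-- Run on the 1st of every month at 2:00 AM
EXEC dbo.sp_add_jobschedule @job_name = N'Monthly Index Rebuild',
    @name = N'Monthly',
    @freq_type = 16,        -- monthly
    @freq_interval = 1,     -- day 1 of the month
    @active_start_time = 020000;

-- Target the local server
EXEC dbo.sp_add_jobserver @job_name = N'Monthly Index Rebuild';
```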

Final Comparison

After you’ve set all this up, don’t forget to run the Standard Reports again and compare them with the reports you ran before you did anything. You can also rerun the query you wrote and see if you can detect any improvement.

OK. That’s it. Good luck.

For more tips on optimizing SQL Server database queries, see A Beginner’s Guide to SQL Server Database Optimization.

Script to identify indexes that should be rebuilt.

P.S. To simply identify indexes that should be rebuilt, run just a portion of the script:

-- This query identifies indexes on a database that need to be rebuilt.
-- The recommendation is to rebuild indexes whose fragmentation exceeds 30.0%
-- and whose page count exceeds 50 pages. These parameters may be adjusted below.

DECLARE @page_count_minimum smallint
DECLARE @fragmentation_minimum float

SET @page_count_minimum = 50
SET @fragmentation_minimum = 30

SELECT sys.objects.name AS Table_Name,
    sys.indexes.name AS Index_Name,
    avg_fragmentation_in_percent AS frag,
    page_count AS page_count,
    sys.dm_db_index_physical_stats.object_id AS objectid,
    sys.dm_db_index_physical_stats.index_id AS indexid,
    partition_number AS partitionnum
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
INNER JOIN sys.objects
    ON sys.objects.object_id = sys.dm_db_index_physical_stats.object_id
INNER JOIN sys.indexes
    ON sys.indexes.index_id = sys.dm_db_index_physical_stats.index_id
    AND sys.indexes.object_id = sys.dm_db_index_physical_stats.object_id
WHERE avg_fragmentation_in_percent > @fragmentation_minimum
    AND sys.dm_db_index_physical_stats.index_id > 0
    AND page_count > @page_count_minimum
ORDER BY page_count DESC

For more tips on managing a SQL Server Database, see Managing a SQL Server Database: Tips for Beginners